Re: Performance issue when trashing files (backtraced to fsync)
- From: Alexander Larsson <alexl redhat com>
- To: Xavier Bestel <xavier bestel free fr>
- Cc: gtk-devel-list gnome org
- Subject: Re: Performance issue when trashing files (backtraced to fsync)
- Date: Tue, 11 Aug 2009 19:54:35 +0200
On Tue, 2009-08-11 at 17:55 +0200, Xavier Bestel wrote:
> On Tue, 2009-08-11 at 16:35 +0100, jcupitt gmail com wrote:
> > 2009/8/11 <jcupitt gmail com>:
> > > 2009/8/11 Alexander Larsson <alexl redhat com>:
> > >> Clearly we should do at least 3, which will fix this case (and other
> > >> similar tempfile cases). However, given the extremely bad performance
> > >> here we should maybe add the extra API in 2 allowing apps to avoid the
> > >> cost when needed? It's kinda ugly to expose that to users, but the
> > >> performance cost is pretty ugly too...
> > >
> > > I'm probably being stupid here, but how about putting the fsync in a
> > > timeout? Instead of calling fsync() directly, add a new thing called
> > > g_fsync_queue() which queues up an fsync 'soon'.
> > Oh ahem, I guess I'm thinking of sync() rather than fsync(). Though in
> > this case one sync() at the end of the delete would certainly be
> > faster than thousands of fsync()s.
> But more unsafe. The aim of frequent fsync() is to be sure not to lose
> data.
Yes, there is already a system-wide timeout and sync, so if you don't
care about losing data, just don't do the fsync.
> I don't know how standardized the trash "protocol" is, but maybe one
> keyfile per deleted file is too much. Some kind of batching would be
> nice.
It's a freedesktop.org standard, and furthermore the creation of the file
is used for atomicity guarantees, so it's hard to change. Anyway, there
are not normally that many files; if you e.g. trash a folder, there will
just be one info file for the whole thing.
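For reference, the per-item entry the freedesktop.org Trash specification mandates is a small keyfile, and the atomicity guarantee mentioned above comes from the usual write-temp/fsync/rename pattern. A hedged sketch (the helper name, temp-file suffix, and paths are illustrative, not the actual gvfs code):

```python
import os


def write_trashinfo(trash_info_dir, name, original_path, deletion_date):
    """Illustrative sketch: write a .trashinfo keyfile in the layout the
    freedesktop.org Trash spec describes, fsync'ed before an atomic
    rename so the entry either fully exists or does not exist at all.
    """
    body = ("[Trash Info]\n"
            "Path=%s\n"
            "DeletionDate=%s\n" % (original_path, deletion_date))
    final = os.path.join(trash_info_dir, name + ".trashinfo")
    tmp = final + ".tmp"  # made-up suffix for this example
    with open(tmp, "w") as f:
        f.write(body)
        f.flush()
        os.fsync(f.fileno())  # the per-file cost this thread is about
    os.rename(tmp, final)  # atomic on POSIX: no partially written entry
    return final
```

The fsync before the rename is what makes the creation a durability barrier as well as an atomicity one, and it is exactly the step whose per-file cost the thread is debating.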