Re: [gamin] New inotify backend checked in

On Thu, Aug 04, 2005 at 02:47:10PM -0400, John McCutchan wrote:
> USER: gam_server blocks on inotify_device_fd
> 
> KERNEL: event is queued on inotify_device_fd
> 
> USER: gam_server wakes up, sees that only 1 event has been queued,
> decides to go back to sleep

  Hum, that sounds like a heuristic, what are your criteria?
If the event is that a file was created on the Desktop, why do you delay
at all, and for how long?
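
  For the record, here is the sort of thing I imagine you mean, as a
minimal sketch; the fd name, threshold and delay values are my guesses,
not your actual gam_server code:

    #include <sys/ioctl.h>
    #include <unistd.h>

    #define BATCH_THRESHOLD 4096   /* guessed cutoff, in bytes */
    #define BATCH_DELAY_US  20000  /* guessed 20ms batching delay */

    /* FIONREAD on an inotify fd reports the bytes currently queued;
     * if only a little is pending, sleep so more events accumulate. */
    static void maybe_sleep_for_batch(int inotify_fd)
    {
        unsigned int pending = 0;

        if (ioctl(inotify_fd, FIONREAD, &pending) == 0 &&
            pending < BATCH_THRESHOLD)
            usleep(BATCH_DELAY_US);
    }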

> First off, I don't think gamin is the right choice for this kind of work
> load. And you wouldn't need flow control if you were talking directly to
> inotify, since you can just ask for only IN_CLOSE_WRITE events. Also,
> I'd like to see how the gam_server (using inotify) handles this kind of
> load. I have a feeling that the performance would be better than
> expected.

  The only expectation is the comparison with the flow control present in
the current released version. I don't think performance would be better
than user-level flow control.
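
  To make the comparison concrete, asking inotify directly for only the
close-after-write events looks roughly like the sketch below, assuming
the sys/inotify.h interface; the path is just an example and error
handling is stripped:

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/inotify.h>

    int main(void)
    {
        char buf[4096];
        ssize_t len;
        int fd = inotify_init();

        /* only files closed after being written, no other noise */
        inotify_add_watch(fd, "/var/spool/news", IN_CLOSE_WRITE);

        while ((len = read(fd, buf, sizeof(buf))) > 0) {
            struct inotify_event *ev = (struct inotify_event *) buf;
            /* a real loop would walk every event packed in buf */
            printf("close-write: %s\n", ev->len ? ev->name : "?");
        }
        return 0;
    }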

> >   I think this applies to an awful lot of server usage (NNTP, SMTP,
> > even HTTP to regenerate from modified templates), I think if you were
> > to switch beagle to gamin you would have to either extend the API or
> > add flow control at the user level, otherwise the kernel is just 
> > gonna drop events. 
> 
> Beagle doesn't use any flow control at all. The kernel will queue up to
> 16384 events per inotify instance. That is a ton. 

  I will trade my cold (pageable) extra stat data in user space for
your hot (cache-wise) pinned kernel memory holding queued events. It's a
trade-off, and I'm not sure we are on the right side of it.
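
  And when that queue does fill, events are silently dropped; the only
signal the consumer gets is an event flagged IN_Q_OVERFLOW, after which
it has to re-stat everything anyway. A sketch of the check:

    #include <sys/inotify.h>

    /* When the kernel queue (16384 events by default) fills, the next
     * event read carries IN_Q_OVERFLOW and everything after it was
     * lost; the caller must fall back to stat()ing its resources. */
    static int queue_overflowed(const struct inotify_event *ev)
    {
        return (ev->mask & IN_Q_OVERFLOW) != 0;
    }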

> > Of course it's hard to benchmark correctly because
> > correctness is #1 factor. I believe first in getting the architecture
> > right, and only then benchmark and optimize, not the way around ;-)
> 
> I think that we should wait until we can find a load that causes a
> problem before we add 'fallback to poll' flow control. We have all the
> code, it is trivial to hook it back into the inotify backend. I'd just
> like to see a real case where the new backend causes a performance
> problem. 

  I can understand that, but how are you gonna gather that workload
feedback? It's gonna take a while before people even test a kernel with
inotify under this kind of workload.
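
  The 'fallback to poll' idea itself is simple enough; conceptually it
is something like the sketch below, where the busy counter, threshold
and struct layout are invented for illustration (the real code is the
one in the released backend):

    #include <time.h>
    #include <sys/stat.h>
    #include <sys/inotify.h>

    struct watched {
        char    path[256];
        int     wd;          /* inotify watch descriptor, -1 = polling */
        int     events;      /* events seen in the current window */
        time_t  last_mtime;
    };

    /* if a resource gets too busy, drop the kernel watch and check
     * its mtime on a timer instead, so the kernel queue stays small */
    static void maybe_fall_back(int fd, struct watched *w)
    {
        struct stat st;

        if (w->wd >= 0 && w->events > 100) {   /* assumed threshold */
            inotify_rm_watch(fd, w->wd);
            w->wd = -1;
            if (stat(w->path, &st) == 0)
                w->last_mtime = st.st_mtime;   /* poll from here on */
        }
    }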

> Besides, we can save TONS of memory by going this route. Right now
> memory is much scarcer than CPU.

  Eeek, that depends who you talk to, don't generalize. And we didn't
try to optimize the stat data at all. dnotify is horrible for that because
it forces you to maintain the full tree and directory children; with inotify
it would be just one stat record per busy resource in a hash table, way
cheaper!
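
  Concretely, since gamin already uses GLib, I am thinking of something
like this, allocated only for the resources under flow control; the
function names other than GLib's own are mine:

    #include <glib.h>
    #include <sys/stat.h>

    /* created once with:
     * stat_cache = g_hash_table_new_full(g_str_hash, g_str_equal,
     *                                    g_free, g_free); */
    static GHashTable *stat_cache;

    static void remember_stat(const char *path)
    {
        struct stat *st = g_new0(struct stat, 1);

        if (stat(path, st) == 0)
            g_hash_table_insert(stat_cache, g_strdup(path), st);
        else
            g_free(st);
    }

    static gboolean changed_since_last(const char *path)
    {
        struct stat now, *old = g_hash_table_lookup(stat_cache, path);

        if (old == NULL || stat(path, &now) != 0)
            return TRUE;             /* unknown, report as changed */
        return now.st_mtime != old->st_mtime ||
               now.st_size  != old->st_size;
    }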

Daniel

-- 
Daniel Veillard      | Red Hat Desktop team http://redhat.com/
veillard redhat com  | libxml GNOME XML XSLT toolkit  http://xmlsoft.org/
http://veillard.com/ | Rpmfind RPM search engine http://rpmfind.net/


