Re: dead nfs mount blocks opening other folders

Alexander Larsson wrote:
> On Thu, 2004-11-11 at 16:50 +0000, Laszlo Kovacs wrote:

> > And here lies the problem. The queues are processed from the front and new items are added to the end. An item at the front of a queue can block the processing of all items behind it. If my NFS folder does not come back from the async call for a long time, then none of the items behind it get processed. So when I click on "/", all elements from "/" go into the high priority queue, get processed, then are moved to the low priority queue, and so on. /brokennfsmount does not move out of the low priority queue for a very long time. The reason is that directory_count_start() takes a very long time to run: this function registers a callback through gnome_vfs_async_load_directory() to do the counting, and it takes a very long time for that callback to be called.
> >
> > If I try to cd into /brokennfsmount or ls its contents, it takes a long time to get the command prompt back, so this is not a gnome-vfs-specific problem.
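For reference, the directory_count_start() path described above essentially boils down to starting an asynchronous directory load whose result only arrives through a callback. Here is a rough sketch of that flow, written from memory of the gnome-vfs 2.x async API (the exact signature may differ slightly), with the CountState struct and the start_directory_count() helper made up purely for illustration:

#include <libgnomevfs/gnome-vfs.h>

/* Hypothetical counting state, just for this sketch. */
typedef struct {
        GnomeVFSAsyncHandle *handle;
        guint count;
} CountState;

static void
count_callback (GnomeVFSAsyncHandle *handle,
                GnomeVFSResult result,
                GList *entries,
                guint entries_read,
                gpointer callback_data)
{
        CountState *state = callback_data;

        state->count += entries_read;

        if (result != GNOME_VFS_OK) {
                /* GNOME_VFS_ERROR_EOF means the whole directory has been
                 * listed; on a dead NFS mount we may simply never get here. */
                g_print ("item count: %u\n", state->count);
                g_free (state);
        }
}

static void
start_directory_count (const char *text_uri)
{
        CountState *state = g_new0 (CountState, 1);

        /* This returns immediately; the counting only happens as
         * count_callback fires. */
        gnome_vfs_async_load_directory (&state->handle,
                                        text_uri,
                                        GNOME_VFS_FILE_INFO_DEFAULT,
                                        32,        /* items per notification */
                                        GNOME_VFS_PRIORITY_DEFAULT,
                                        count_callback,
                                        state);
}

The call itself comes back right away; what is slow on a dead NFS mount is how long it takes before count_callback is ever invoked.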


> Is this really the problem? Are you sure the file being in a queue
> blocks other items in that queue from running? It seems to me that all
> items in a queue are started in parallel, and the problem seems to be
> that if the /brokennfsmount file sticks in one queue, the lower prio
> queues get no cycles at all.

If you look at the loops in start_or_stop_io() you can see that if an element in a list needs work done, that element is processed and the rest of the list is ignored for this cycle (there is a "return" statement there). So as long as that element stays in the queue, it blocks the rest of the list from being looked at.
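To make the pattern concrete, here is a simplified sketch of the kind of loop I mean; it is not the actual nautilus-directory-async.c code, and the File/Directory types and helper names are made up for illustration:

#include <glib.h>

/* Minimal stand-in types; the real ones live in nautilus-directory-async.c. */
typedef struct {
        gboolean needs_info;
        gboolean needs_count;
} File;

typedef struct {
        GList *high_priority_queue;     /* list of File * */
        GList *low_priority_queue;      /* list of File * */
} Directory;

static void start_fetching_info (File *file)   { /* kick off async info load */ }
static void directory_count_start (File *file) { /* kick off async item count */ }

static void
start_or_stop_io (Directory *directory)
{
        GList *node;

        for (node = directory->high_priority_queue; node != NULL; node = node->next) {
                File *file = node->data;

                /* In the real code "needs work" also covers work that has
                 * been started but has not come back yet. */
                if (file->needs_info) {
                        start_fetching_info (file);
                        return;         /* everything behind this element is ignored this cycle */
                }
        }

        /* Only reached once nothing in the high priority queue needs work. */
        for (node = directory->low_priority_queue; node != NULL; node = node->next) {
                File *file = node->data;

                if (file->needs_count) {
                        directory_count_start (file);
                        return;         /* /brokennfsmount sitting here starves everything behind it */
                }
        }
}

As long as the first element that needs work has not finished, every pass through the loop stops at it and the elements queued behind it are never examined.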



> > I think that in Nautilus we should make sure that situations like this cannot block processing of other folders, as opposed to changing things in gnome-vfs.
> >
> > I don't have any obvious solution in mind yet (I don't think I know the code well enough for this). This is mostly for Alex and other interested people, to help them understand the problem and maybe to provide some feedback (if they know this code) on whether they agree that what I described is a problem that needs to be fixed.
> >
> > I would appreciate any feedback.


> I'm not sure what a good solution is here. The blocking was done on
> purpose so that reading the high priority data from files wasn't
> hampered by reading low-prio data. Maybe we can allow a few items in the
> high prio queue when we start the low-prio one? It's not really a
> solution though... Maybe we can start the low prio queue when all items
> on the high prio queue have run for some minimal amount of time?

What I had in mind is a mechanism where we mark an element as being processed when its processing is started, and if an element is already marked when we get to the queue again we move on to the next one (see the rough sketch below). Of course there is a lot of code here, so a lot of care needs to be taken to fix it properly.
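Building on the sketch earlier in this mail, and with a purely hypothetical io_in_progress flag added to the File struct, the low priority loop would then look something like this:

static void
start_low_priority_io (Directory *directory)
{
        GList *node;

        for (node = directory->low_priority_queue; node != NULL; node = node->next) {
                File *file = node->data;

                if (file->io_in_progress) {
                        continue;       /* already started (e.g. /brokennfsmount); look at the next element */
                }
                if (file->needs_count) {
                        file->io_in_progress = TRUE;    /* cleared again in the async callback */
                        directory_count_start (file);
                        return;
                }
        }
}

The tricky part is clearing that flag reliably in every callback and cancellation path, which is where the "a lot of care" comes in.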

Laszlo


> =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
>  Alexander Larsson                                   Red Hat, Inc
>                 alexl redhat com    alla lysator liu se
> He's an all-American native American farmboy looking for 'the Big One.'
> She's a provocative renegade Valkyrie who dreams of becoming Elvis. They fight crime!


