New directory-scanning behaviour in CVS

Previously, when beagled started up, it recursively scanned all directories in your home directory (and other roots), created an internal ID for each directory, stored those IDs in a database, created inotify watches on each directory, and so on.

This was very expensive for large home directories - some users reported startup times of 5-10 minutes of continuous disk I/O while this process ran. Stick a mozilla checkout in your home directory and it is quite noticeable.

Only once that scanning process was complete did beagle start to slowly crawl directories and handle file events.

I have committed a change to this behaviour to CVS. The separate scanning and crawling processes have been merged into one - crawling.

Crawling happens one directory at a time, with an appropriately long pause between directories. On crawl, the directory is scanned for subdirectories (which are added to the crawling queue, if necessary), and then its contents are crawled (files are indexed if they are new or have changed, etc.). inotify watches are created as soon as a directory is known.
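Beagle itself is written in C#, so this is only a rough illustration, not the actual implementation: a Python sketch of the merged scan-and-crawl loop, where the hypothetical `watched` and `indexed` lists stand in for creating inotify watches and handing files to the indexer.

```python
import os
import time
from collections import deque

def crawl(root, pause=0.0):
    """Crawl one directory at a time, breadth-first.

    Illustrative sketch only: `watched` stands in for inotify watch
    creation, `indexed` for submitting a file to the indexer.
    """
    watched = [root]            # the root is watched immediately
    indexed = []
    queue = deque([root])
    while queue:
        directory = queue.popleft()
        for entry in sorted(os.scandir(directory), key=lambda e: e.path):
            if entry.is_dir(follow_symlinks=False):
                watched.append(entry.path)  # watch as soon as it is known
                queue.append(entry.path)    # crawl its contents later
            elif entry.is_file(follow_symlinks=False):
                indexed.append(entry.path)  # index new/changed files
        time.sleep(pause)                   # pause between directories
    return watched, indexed
```

Because the queue is processed first-in first-out, directories are visited breadth-first, which is why shallower directories end up watched and crawled before deeply nested ones.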

This means that the startup expense is now almost non-existent, since scanning is done on-crawl, but it also means that beagle will be blind to events from parts of the filesystem until it has finished crawling them.

To reduce the effects of this blindness, beagled starts watching directories as soon as it sees them (i.e. when they are added to the crawling queue) and will happily respond to events even before they have been crawled. Similarly, beagled will immediately create watches on 1 level of subdirectories beneath each indexing root at startup.
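The startup pre-watching could be sketched like this (again an illustration, not Beagle's code; `add_watch` is a hypothetical stand-in for creating an inotify watch):

```python
import os

def prewatch_roots(roots, add_watch):
    """At startup, watch each indexing root and one level of its
    subdirectories, so early file events there are not missed
    before crawling reaches them."""
    for root in roots:
        add_watch(root)
        for entry in os.scandir(root):
            if entry.is_dir(follow_symlinks=False):
                add_watch(entry.path)  # one level deep only
```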

The crawling pattern means that, in general, less deeply nested directories are watched/crawled first.

Any file activity in queued directories will result in that directory being bumped up the crawling queue.
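Assuming "bumped up" means moved to the front of the queue (a guess on my part, not confirmed by the post), the reprioritisation might look like:

```python
from collections import deque

def bump(queue, directory):
    """Move `directory` to the front of the crawl queue when file
    activity is seen there, so it is crawled next.  If the directory
    is not queued (e.g. already crawled), do nothing."""
    try:
        queue.remove(directory)
    except ValueError:
        return
    queue.appendleft(directory)
```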

I think that's everything.
