Re: gmc associations



James Smith <j-smith@physics.tamu.edu> writes:

> >   a) make assumptions based upon the filename - 1 I/O per directory.
> >      Very fast.  Good for FTP/HTTP filesystems.
> >   b) open up the file and read it's data and then make assumptions
> >      based upon magic numbers and so forth (ie "file" or LibDataType).
> >      usually only 1 I/O per file, so not as fast as a).  Usually fast
> >      enough for local and LAN filesystems.
> >   c) Keep data associated with each file containing the type of data
> >      of the file (c.f. Macintosh).  This is out of scope of GNOME and
> >   d) Assume a), open the file and check for file type suggested by a)
> >      first, otherwise use b).  1 I/O per file.
> >   e) Use technique a) or d), based upon user preferences.  You might
> I like (e) - but on fast systems with lots of memory (like mine at home
> and rainbow at work), once a particular area is accessed on a disk, it
> can pretty well be assumed the surrounding areas will be in memory long
> enough to do whatever reading is needed.  I would like to see a good
> default setting for this, but have the ability to fine tune it through a
> control panel - haven't thought of the details yet...
> 
> Maybe a matrix:
>               filename    magic    smart
>    Local
>    NFS
>    Network
>       FTP
>       HTTP
>       etc.
> 
>    with settings for smart (as in option d above), dumb (as in options a
> or b), and preferred order with an optional timeout before going to the
> other.  I know this sounds a bit like (e), but I think it's a little
> more flexible.  Maybe even have a system default that the sysadmin could
> tune to the system.

I would say that your answer is the same in principle as e), but with
greater flexibility in how the options are specified.  Of course, I'm
not a coder for GNOME (yet), but I'm sure the libvfs people would
welcome this suggestion (I think that is the most appropriate place
for this stuff to live).
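To make option d) concrete, here is a minimal Python sketch of the
"smart" lookup: guess from the filename first, and only trust that
guess when the magic number confirms it.  The extension and magic
tables below are invented for illustration; a real implementation
would consult the desktop's MIME database rather than hard-code them.

```python
import os

# Hypothetical lookup tables, for illustration only.
EXTENSION_MAP = {
    ".gif": "image/gif",
    ".png": "image/png",
    ".ps": "application/postscript",
}
MAGIC_MAP = {
    b"GIF8": "image/gif",
    b"\x89PNG": "image/png",
    b"%!PS": "application/postscript",
}

def guess_by_name(path):
    """Strategy a): one directory I/O, never reads the file."""
    _, ext = os.path.splitext(path)
    return EXTENSION_MAP.get(ext.lower())

def guess_by_magic(path):
    """Strategy b): one extra I/O to read the file's header bytes."""
    with open(path, "rb") as f:
        head = f.read(4)
    for magic, mime in MAGIC_MAP.items():
        if head.startswith(magic):
            return mime
    return None

def guess_smart(path):
    """Strategy d): take the filename's hint, but let the magic
    number override it when the two disagree."""
    hint = guess_by_name(path)
    actual = guess_by_magic(path)
    if hint is not None and hint == actual:
        return hint
    return actual or hint
```

Option e) would then just be a per-filesystem switch choosing between
`guess_by_name` and `guess_smart`.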

<off-topic severity=MILD>
The fact that a machine is large and has lots of memory is a pretty
weak argument - when you open and read each file in a directory,
that's one I/O to find the inode in the inode table, another for the
block lookup, and another to read the file's data.  On a big machine
with plenty of memory and good caching you will usually get hits on
the first two I/Os, but the third only hits the cache if you have
read that file recently.  Most Unix file systems are unlikely to
store a directory's files sequentially on the disk, so read-ahead
caching costs more in performance than it gains here.
</off-topic>

<off-topic>
> Another problem we could have is people using their home directories
> over a network.  We have a central file server (rainbow) which serves
> out the home directories via NFS to canary and pine (both colors). 
> canary and pine (and rainbow) are RH5.1 boxes, but canary and pine are
> ``smart'' X-terminals which run the programs locally with the home
> directories served from rainbow.  Rainbow manages dumb X-terminals such
> as NCDs.  It would be nice to have several different set ups that could
> take into account what machine the person is using to access their
> files.  One setting might be better for pine and canary than for
> rainbow.
> (Don't ask why we set it up like this - we just did.)

If you use NIS, use the autofs daemon and automount maps.  It's the
way to go.  Then your home directory is always /home/username or
/home/machine/username (depending on how you choose to set it up).
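For the record, an automount setup along those lines might look like
the fragment below (map names follow the stock autofs examples; the
export path on rainbow and the mount options are my assumptions, not
something from your setup):

```
# /etc/auto.master -- have autofs manage /home via the auto.home map
/home   /etc/auto.home  --timeout=60

# /etc/auto.home -- "*" matches any username and "&" substitutes it,
# so each user's home mounts on demand from the file server
*       -rw,hard,intr   rainbow:/export/home/&
```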
</off-topic>
-- 
Sam Vilain, sam@whoever.com         work: sam.vilain@unisys.com
http://www.hydro.gen.nz                home: sam@hydro.gen.nz
