Re: PROPOSAL: GNOME Volume Manager for GNOME 2.8



--- Jamie McCracken <jamiemcc blueyonder co uk> wrote:
> On Fri, 2004-06-25 at 20:21, Mike Hearn wrote:
> > On Fri, 25 Jun 2004 20:01:28 +0100, Sander Vesik wrote:
> > > So why not use a standard packing mechanism like, say, zip instead
> > > of creating a new packing format, especially as everybody is
> > > realistically going to have zlib mapped anyway?
> > 
> > While you can have uncompressed zip files, support for the format in
> > free software is not great, and it doesn't deal with things like
> > alignment and so on. The packed files can just be mapped and used
> > directly.
> 
> An uncompressed tar file would be better (and possibly faster) than
> zip, as it's more ubiquitous on *nixes. The alignment issue is likely to have

OTOH tar is a real pain in the ass: it comes in more than one flavour,
and there is no ubiquitous library for handling it that everybody has.
And everybody is going to have zlib on their GNOME system, *also*
already mapped into the program's address space because libpng needs
it. So it's the only zero-overhead option.
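
To make the "mapped and used directly" point concrete, here is a
minimal sketch of reading a stored (method 0) zip entry straight out of
an mmap'ed archive. It is only an illustration, not a proposed API: it
assumes a well-formed archive, looks at the first local header only,
and skips the central-directory walk and error handling a real reader
would need.

    /* Sketch: hand back a pointer into an mmap'ed zip for the first
     * stored entry's data.  Hypothetical helper for illustration. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    static uint16_t rd16(const unsigned char *p) { return p[0] | (p[1] << 8); }
    static uint32_t rd32(const unsigned char *p) { return rd16(p) | ((uint32_t)rd16(p + 2) << 16); }

    const void *map_first_entry(const char *path, uint32_t *size_out)
    {
        struct stat st;
        int fd = open(path, O_RDONLY);
        if (fd < 0) return NULL;
        if (fstat(fd, &st) < 0) { close(fd); return NULL; }

        unsigned char *base = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        close(fd);                      /* the mapping keeps the file alive */
        if (base == MAP_FAILED) return NULL;

        /* Local file header: "PK\3\4" signature, fixed 30-byte layout. */
        if (st.st_size < 30 || rd32(base) != 0x04034b50 ||
            rd16(base + 8) != 0 /* 0 = stored, i.e. no deflate step */) {
            munmap(base, st.st_size);
            return NULL;
        }
        *size_out = rd32(base + 18);    /* compressed == uncompressed size */
        return base + 30 + rd16(base + 26) + rd16(base + 28); /* skip name + extra */
    }

Note that the data offset (30 bytes plus the name and extra-field
lengths) is exactly where Mike's alignment complaint bites; a packer
could pad the extra field to force page-aligned entries, though, the
way some zip-based formats do.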

> negligible effect on performance, as the main bottleneck is disk I/O
> (saving a dozen ns with alignment does not really compare to the dozen
> ms expended by reading the file in the first place). I don't mind Alan
> doing it his own way, but I would like some tools to be able to pack
> and unpack icons myself if that's the case (if we could have File
> Roller supporting that pack format then that would be great for
> ordinary users).
>  

Widespread tool support should be one of the main considerations.
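For what it's worth, if the pack format were plain stored zip, the
tools would come for free: "zip -0 icons.zip *.png" should already
produce an uncompressed archive (level -0 means store, don't deflate),
and File Roller and unzip can both open it. (The archive name here is
just for illustration.)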
> 
> > 
> > One thing that does concern me is that we seem to be hacking around
> > problems with the filing system. Why isn't there any way to mark a
> > directory as "grouped" which asks the filing system to put all files in
> > that directory into a single "run" on the disk consecutively.
> 
> That could prove problematic if the directory contents change. We
> don't want to risk defragging the entire partition every time we make
> a change in a "grouped" directory (which is likely to happen if you're
> low on disk space).
> 
> >  You could
> > then ask the kernel, via a new syscall or whatever, to map a series
> > of files consecutively, and you have something similar to the packer
> > system except you can still read/write the files using the standard
> > APIs.
> 
> 
> > 
> > Yes I know this is probably more work and harder to get into the kernel
> > etc, but surely that's better than just creating filing-systems-in-a-file
> > because the real thing isn't up to scratch?
> 
> One big file will always be more efficient than lots of little files,
> regardless. Bear in mind that we read data from the disk in 512-byte
> chunks, so unless each icon file is exactly a multiple of 512 bytes it
> could involve an extra read. For 1000 icons that's potentially 1000
> extra reads, whilst for a single file it would be only one extra read
> at most.
>  

Most file systems are not quite that bad, assuming the individual files
are larger than 512 bytes: they read whole blocks (typically 1-4 KB)
and do readahead, so the real per-file cost is the seek and metadata
lookup, not the odd partial sector.
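
Rough numbers, purely illustrative (assume a 4 KB block size, ~10 ms
per seek, and ~30 MB/s sequential throughput): 1000 separate ~2 KB
icons read cold cost on the order of 1000 seeks, i.e. about 10 seconds,
while the same ~2 MB packed into one file is one seek plus a sequential
read, well under 100 ms. The half-sector wasted at the end of each file
is noise by comparison; it's the seeks that kill you.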



	
	
		