Re: Disk seeks article [Re: New Optimization Section on d.g.o]
- From: Alan Cox <alan lxorguk ukuu org uk>
- To: Mark McLoughlin <markmc redhat com>
- Cc: Robert Love <rml ximian com>, Desktop Devel <desktop-devel-list gnome org>
- Subject: Re: Disk seeks article [Re: New Optimization Section on d.g.o]
- Date: Mon, 27 Sep 2004 13:24:06 +0100
On Llu, 2004-09-27 at 08:59, Mark McLoughlin wrote:
> 1) Simulate initial startup time by completely clearing the disk cache
> before taking measurements
int fd = open("/dev/hda", O_RDONLY);   /* needs root; BLKFLSBUF is in <linux/fs.h> */
ioctl(fd, BLKFLSBUF, 0);
That flushes the Linux cache, but you can't really flush the drive cache.
The best you can do is to read lots of data, and be aware that the
drives are smart enough that a long linear 8Mb read will not necessarily
do it.
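Putting the two together, a minimal sketch might look like the following: drop Linux's cached copy with BLKFLSBUF, then issue scattered (not linear) reads to try to churn the drive's own cache. The device path and the 64MB/1MB figures are illustrative assumptions, not anything the drive guarantees.

```c
/* Sketch: flush the Linux block-device cache, then read data at
 * scattered offsets to try to churn the drive's own cache.
 * /dev/hda and the sizes here are illustrative; needs root on a
 * real block device. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/fs.h>   /* BLKFLSBUF */

int churn_cache(const char *dev)
{
    int fd = open(dev, O_RDONLY);
    if (fd < 0) { perror("open"); return -1; }

    if (ioctl(fd, BLKFLSBUF, 0) < 0)    /* drop Linux's cached copy */
        perror("BLKFLSBUF");            /* harmless on a plain file */

    char buf[4096];
    /* Scattered, not linear, reads: a smart drive may recognise a
     * long linear read and leave its cache contents alone. */
    for (off_t off = 0; off < 64L * 1024 * 1024; off += 1024 * 1024) {
        if (pread(fd, buf, sizeof buf, off) < 0) {
            perror("pread");
            break;
        }
    }
    close(fd);
    return 0;
}
```

Run against a real device this only dilutes the drive cache; as noted above, there is no reliable way to flush it from the host.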
> 2) Simulate the files-scattered-across-the-disk problem
If you open a block device in 2.6 with O_DIRECT you can read sectors
and you will be doing direct-to-disk I/O, so you will be able to measure
the time each I/O takes. Obviously there is a constant per-command
overhead because of the lack of pipelining here. Without O_DIRECT it
will use the cache, so you see pipelining but also the cache effects.
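A sketch of that measurement, assuming a 4096-byte block size and a path you have permission to read: O_DIRECT requires a sector-aligned buffer, and some filesystems (e.g. tmpfs) reject it, so this falls back to cached reads with a note that the timings then include the page cache.

```c
/* Sketch: time individual reads from a device or file opened with
 * O_DIRECT, so each read goes to the disk rather than the page cache.
 * The 4096-byte block size and alignment are assumptions. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

/* Read nblocks blocks of blksz bytes, printing the time each takes.
 * Returns the number of blocks successfully read, or -1 on error. */
int time_direct_reads(const char *path, size_t blksz, int nblocks)
{
    int fd = open(path, O_RDONLY | O_DIRECT);
    if (fd < 0)
        /* some filesystems reject O_DIRECT; fall back so the sketch
         * still runs, though timings then include the page cache */
        fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return -1; }

    void *buf;
    if (posix_memalign(&buf, 4096, blksz) != 0) { close(fd); return -1; }

    int done = 0;
    for (int i = 0; i < nblocks; i++) {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        ssize_t n = read(fd, buf, blksz);
        clock_gettime(CLOCK_MONOTONIC, &t1);
        if (n <= 0)
            break;
        double us = (t1.tv_sec - t0.tv_sec) * 1e6 +
                    (t1.tv_nsec - t0.tv_nsec) / 1e3;
        printf("block %d: %zd bytes in %.1f us\n", i, n, us);
        done++;
    }
    free(buf);
    close(fd);
    return done;
}
```

Because of the per-command overhead mentioned above, the per-read times include command setup cost, not just the seek and transfer.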
> 3) Profile the disk accesses made by an application - disk seeks,
> cache misses, read times etc. etc.
sar can do some of this but not at application level. You may need to
get some kernel tweaks done to add a profile command buffer. It is easy
to do for IDE as it has no command queue 8)
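Short of those kernel tweaks, a crude system-wide approximation is to diff /proc/diskstats around a workload. The field layout below follows the kernel's iostats documentation; the device name is whatever your disk is called, and this remains system-wide, which is exactly the gap per-application profiling would have to fill.

```c
/* Sketch: snapshot per-device read counters from /proc/diskstats.
 * Take one snapshot before and one after a workload and subtract.
 * Fields: major minor name reads-completed reads-merged sectors-read... */
#include <stdio.h>
#include <string.h>

/* Fill in reads-completed and sectors-read for one device.
 * Returns 0 on success, -1 if the device is not found. */
int disk_counters(const char *dev, unsigned long *reads,
                  unsigned long *sectors)
{
    FILE *f = fopen("/proc/diskstats", "r");
    if (!f) return -1;

    char line[256], name[64];
    unsigned int major, minor;
    unsigned long r, rmerge, rsect;
    int found = -1;
    while (fgets(line, sizeof line, f)) {
        if (sscanf(line, "%u %u %63s %lu %lu %lu",
                   &major, &minor, name, &r, &rmerge, &rsect) == 6 &&
            strcmp(name, dev) == 0) {
            *reads = r;
            *sectors = rsect;
            found = 0;
            break;
        }
    }
    fclose(f);
    return found;
}
```

Subtracting two snapshots taken around an application's startup gives the reads it (plus everything else running) caused, which is as far as you get without kernel help.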
> 4) Profile the disk accesses happening at login - i.e. with multiple
> applications all doing lots of disk accesses, who are the ones
> doing all the reading and who are the ones getting hammered because
> they have to wait for their turn?
A not unreasonable rule of thumb for current disks appears to be that
any linear read between 512 bytes and about 256K-1Mb (varies by disk)
costs the same.