Re: Milestones document




> From: Havoc Pennington <hp@redhat.com>

> 
> Keep in mind that there is almost always a size/speed tradeoff. That is,
> we can almost always make things faster by using more memory - assuming
> you have the memory to use. :-) So there is some cost to making things too
> small.
> 

I have to call you on this one:

Since the early '90s on RISC systems, and since about 1995 on Intel, your
code runs faster if it is SMALLER.

Using more memory almost always makes things SLOWER on current systems.

Memory bandwidth is, as a general rule, much more precious than instructions.
Getting that data structure into a cache line can make a major performance
difference.

Even in 1990 or so, we made the X server MUCH faster while simultaneously
reducing its memory usage in data structures to 40% of the first release
of X version 11.

Example 1:
Keith Packard's new frame buffer code, rather than unrolling Duff's
device across many different loops (via magic ugly macros), pulls
more instructions into the inner loop, yet for most operations is as
fast as or faster than the frame buffer code currently in use in
the X server.  BTW, doing this saves 0.5 megabytes of code space.
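
For those who haven't seen it, the construct in question looks roughly
like this; a from-memory sketch of Tom Duff's device as a plain
memory-to-memory copy, not Keith's actual fb code:

	/* Duff's device: an 8-way unrolled copy loop, with the
	 * switch jumping into the middle of the loop to handle the
	 * leftover count.  Assumes count > 0.  (Duff's original
	 * wrote to a fixed device register, not *to++.) */
	void duff_copy(char *to, const char *from, int count)
	{
		int n = (count + 7) / 8;

		switch (count % 8) {
		case 0: do { *to++ = *from++;
		case 7:      *to++ = *from++;
		case 6:      *to++ = *from++;
		case 5:      *to++ = *from++;
		case 4:      *to++ = *from++;
		case 3:      *to++ = *from++;
		case 2:      *to++ = *from++;
		case 1:      *to++ = *from++;
			} while (--n > 0);
		}
	}

Stamp that out via macros for every depth, rop, and direction, and the
code no longer fits in the instruction cache.  The boring

	while (count--) *to++ = *from++;

loop stays resident in the I-cache, which is often worth more than the
loop-branch instructions it fails to save.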

Another example:
for a lot of people with current hardware, it is probably faster for
applications to REDRAW the window than to rely on backing store.
Simple math shows why: on my 3dfx card, I can fill on the order of
300 million pixels/second.  If the backing store doesn't fit in
off-screen memory, moving it over the PCI or AGP bus is much slower
than doing a fill.
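
To put rough numbers on that (my assumptions: 4 bytes/pixel, and
classic 33 MHz, 32-bit PCI at ~132 MB/second peak):

	fill rate:  300 Mpixels/s * 4 bytes  = 1200 MB/s at the card
	PCI copy:   132 MB/s / 4 bytes/pixel =  ~33 Mpixels/s, tops

That's roughly a factor of ten in favor of just redrawing, before you
even count the read half of the copy; even AGP only narrows the gap.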

People need to RESET their biases on how to optimize code.  Machines
ain't what they were when the 386 was king...
				- Jim



--
Jim Gettys
Technology and Corporate Development
Compaq Computer Corporation
jg@pa.dec.com


