On Tue, 18 Feb 2003 17:08:01 CST, Tim Flechtner <timf trdlnk com> said:

> we have two boxes with identically modeled graphics cards:
> red -    a 2.53 GHz P4, 533 MHz front side bus with 512 MB DDR RAM,
>          running Solaris 8 (x86)
> protea - a 2.0 GHz P4, 400 MHz front side bus with 512 MB SDRAM,
>          running Solaris 8 (x86)
>
>   interval    red   protea
>       1000   0.02     0.02
>        100   0.05     0.05
>         10   0.07     1.50
>
> can anyone suggest why protea's (admittedly a lower power machine) cpu
> usage would shoot up so dramatically under the heavier load?

I'm willing to bet that somewhere between interval=100 and interval=10,
either the CPU or something else saturates and you hit a "knee" in the
performance curve: updates are showing up faster than you can dispatch
them.

Imagine a line at a bank waiting for a teller. If the teller can, on
average, handle clients as fast as they arrive, the line stays short.
But if the teller averages even 5 or 10 seconds per client slower than
the arrival rate, you very soon have a *very* long line.... (The sketch
below my sig puts numbers on this.)

Probable bottlenecks: total CPU, speed of context switches, and possibly
the bandwidth/speed of the underlying flush-to-Xserver code. You might
want to run 'vmstat 5' on each machine at interval=10 and see what gets
pegged at 100% or some other high value.

-- 
Valdis Kletnieks
Computer Systems Senior Engineer
Virginia Tech
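P.S. To put numbers on the teller analogy, here's a minimal single-server
queue sketch in Python. It's purely illustrative: the 7 ms and 11 ms
per-update dispatch costs are made-up stand-ins for the faster and slower
box, not measurements from either machine.

    import random

    def mean_wait(interarrival_ms, service_ms, n=50_000, seed=1):
        """Single-teller queue: 'clients' (screen updates) arrive with
        exponential inter-arrival times averaging interarrival_ms; each
        takes service_ms to dispatch.  Returns the average wait in line."""
        rng = random.Random(seed)
        arrival = 0.0       # when the next update arrives
        teller_free = 0.0   # when the dispatcher next goes idle
        waited = 0.0
        for _ in range(n):
            arrival += rng.expovariate(1.0 / interarrival_ms)
            start = max(arrival, teller_free)   # queue up if busy
            waited += start - arrival
            teller_free = start + service_ms
        return waited / n

    # Hypothetical dispatch costs: 7 ms for the faster box, 11 ms for
    # the slower one.  At interval=10 the slower box is past saturation,
    # so its backlog (and the time spent chewing through it) grows
    # without bound instead of leveling off.
    for interval in (1000, 100, 10):
        print(f"interval={interval:4d}  fast={mean_wait(interval, 7.0):8.2f} ms"
              f"  slow={mean_wait(interval, 11.0):8.2f} ms")

With the 7 ms cost, utilization stays below 1 at every interval, so the
wait barely moves. With 11 ms it crosses 1 at interval=10, and the mean
wait explodes - exactly the kind of knee the table above shows.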