On Tue, 24 Oct 2006 20:46:55 +0100, "Mark Howard" wrote:
> The source code is not available.

Hmm... I thought Zack was making it available. He did mail me the code he was using for cairo, along with the polygon data, (the day before he published the tests).

> - the graphs show 3 polygons, but the sample points are for 4
> polygons. Is the fourth 50x faster with cairo?

Don't know. But the 4th is significantly smaller. The vertex counts for the 4 polygons are on the order of 1000, 10000, 100000, and 30.

> - The first (and supposedly worst for Gtk) polygon is a text rendered
> string. Gtk uses pango for text, so would not generate large polygons
> like this.

Zack said "text", but it's not text from a font such as pango would use. It's the word "another" as if drawn by hand in a vector paint program or something similar.

> - The third test has 100 000 vertices. I'd assume that such polygons
> are hardly ever seen in screen output, so claiming QT as 7* faster
> should really be 7* faster in a use case which virtually never exists
> in time-critical areas.

The 3 reported polygons probably all fall into the case of not being very common. But it is important for a renderer to scale well in order to avoid performance traps, (and I don't know where the bound for "realistic" lies in terms of vertex count).

Another thing worth noting is that it's not just vertex count that matters. Practical tessellation algorithms have performance that is:

    O((n+k) log n)

where n is the number of segments in the polygon and k is the number of intersections. The relationship between n and k is very data-specific, and it is one place where fabricating huge polygons for test cases can break down. For example, it's common to have n >> k, so that the effective performance is O(n log n). But if one generates a giant polygon by just randomly generating points, then the probability of any two line segments intersecting goes way up, with k approaching n**2. So in that case, even a good O((n+k) log n) algorithm will end up exhibiting O(n**2 log n) performance, (just substitute k = n**2 into the bound above). There's a toy demonstration of this effect at the end of this message.

I don't know where the 2nd and 3rd polygons came from for Zack's test, but from a quick glance they look more like random blobs than something "realistic". But who knows.

The most important "cairo vs. Qt" point is the first graph, which shows a basically constant difference between cairo and Qt in spite of the fact that n is increased by an order of magnitude with each test, (and my suspicion is that k is increasing even faster, since the data in polygon 0 looks a little more orderly). This is actually surprising to me, since I didn't expect the "old tessellator" in cairo to scale well, but it appears to be doing fine here. (I never have completely characterized its algorithmic performance, but I've always expected it to be worse than the O((n+k) log n) bound we'd like.)

The other _big_ thing that Zack is claiming is a linear-time approach for drawing polygons with OpenGL. I would be very interested in hearing more details about that, and in seeing some results showing the quality that is achieved.

> - The author of the benchmarks noted in a comment that he also tested
> the new cairo tessellator, but the results 'were a lot worse'. With the
> application, this perhaps could be investigated.

Like I said, I've got the polygon data, and I told Zack I was planning to integrate it into cairo's performance test suite. I did not get any feeling that Zack is trying to hide anything here. And we both said that we're interested in talking with each other more about tessellation algorithms, etc., in the future.
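Since the n-vs-k point above is easy to demonstrate, here's a quick brute-force Python sketch, (nothing to do with cairo's actual code; the polygon generators and the choice of n = 500 are arbitrary, just for illustration). It counts the self-intersections k for two 500-gons: one built from uniformly random points, and one whose vertices go around a rough circle in angular order, (which keeps the polygon star-shaped and therefore free of self-intersections):

    # Toy sketch, not cairo code: brute-force count of self-intersections
    # k for two kinds of n-gon, to show how "random blob" test polygons
    # inflate k toward n**2.
    import math
    import random

    def ccw(a, b, c):
        # Twice the signed area of triangle abc.
        return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])

    def crosses(p1, p2, p3, p4):
        # Proper crossing of two segments (strict inequalities, so shared
        # endpoints and collinear touches don't count).
        d1, d2 = ccw(p3, p4, p1), ccw(p3, p4, p2)
        d3, d4 = ccw(p1, p2, p3), ccw(p1, p2, p4)
        return (d1 > 0) != (d2 > 0) and (d3 > 0) != (d4 > 0)

    def count_k(pts):
        n = len(pts)
        segs = [(pts[i], pts[(i+1) % n]) for i in range(n)]
        k = 0
        for i in range(n):
            for j in range(i+2, n):
                if i == 0 and j == n-1:
                    continue      # adjacent via wrap-around
                if crosses(*segs[i], *segs[j]):
                    k += 1
        return k

    n = 500
    blob = [(random.random(), random.random()) for _ in range(n)]
    # Vertices in angular order around the origin: star-shaped, so k = 0.
    star = [((1 + 0.3*random.random()) * math.cos(2*math.pi*i/n),
             (1 + 0.3*random.random()) * math.sin(2*math.pi*i/n))
            for i in range(n)]
    print("random blob:   k =", count_k(blob))
    print("angular order: k =", count_k(star))

For n = 500, the random blob yields tens of thousands of crossings while the angular-order polygon yields exactly 0, which is precisely the gap that separates O(n log n) behavior from O(n**2 log n) behavior in the bound above.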
As for numbers, the new tessellator still needs some work, (which is what I explained to Zack, and which he mentioned). Numbers to come soon.

-Carl