Re: Gtk+ unit tests (brainstorming)



[CC'ing Keith for a question near the end...]

On Tue, 31 Oct 2006 15:26:35 +0100 (CET), Tim Janik wrote:
> i.e. using averaging, your numbers include uninteresting outliers
> that can result from scheduling artefacts (like measuring a whole second
> for copying a single pixel), and they hide the interesting information,
> which is the fastest possible performance encountered for your test code.

If computing an average, it's obviously very important to eliminate
the slow outliers, because they will otherwise skew it radically. What
cairo-perf currently does for outliers is really cheesy (it simply
ignores a fixed percentage of the slowest results). One thing I
started on was to do adaptive identification of outliers based on the
"> Q3 + 1.5 * IQR" rule as discussed here:

	http://en.wikipedia.org/wiki/Outlier
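
Roughly, that rule amounts to something like this (just a sketch, not
what's in cairo-perf; the quartiles here are taken at crude array
indices with no interpolation):

    #include <stdlib.h>

    static int
    compare_doubles (const void *a, const void *b)
    {
        const double da = *(const double *) a;
        const double db = *(const double *) b;
        return (da > db) - (da < db);
    }

    /* Sort the samples and drop everything above Q3 + 1.5 * IQR.
     * Returns the number of samples kept. */
    static int
    reject_slow_outliers (double *times, int n)
    {
        double q1, q3, fence;
        int i, kept = 0;

        qsort (times, n, sizeof (double), compare_doubles);

        q1 = times[n / 4];
        q3 = times[(3 * n) / 4];
        fence = q3 + 1.5 * (q3 - q1);

        for (i = 0; i < n; i++)
            if (times[i] <= fence)
                times[kept++] = times[i];

        return kept;
    }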

I didn't push that work out yet, since I got busy with other
things. In that work, I was also reporting the median instead of the
average. Surprisingly, these two changes didn't help the stability as
much as I would have hoped. But the idea of using minimum times as
suggested below sounds appealing.

By the way, the reason I started with reporting averages is probably
just because I wanted to report some measure of the statistical
dispersion, and the measure I was most familiar with (the standard
deviation) is defined in terms of the arithmetic mean. But, to base
things on the median instead, I could simply report the "average
absolute deviation" from the median rather than the standard
deviation.
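
Sketched the same way (again illustrative only, and assuming the
samples have already been sorted by the outlier pass above):

    #include <math.h>

    /* Median of a sorted array of samples, plus the average
     * absolute deviation of the samples from that median. */
    static void
    median_and_adev (const double *times, int n,
                     double *median, double *adev)
    {
        double sum = 0.0;
        int i;

        if (n % 2)
            *median = times[n / 2];
        else
            *median = (times[n / 2 - 1] + times[n / 2]) / 2.0;

        for (i = 0; i < n; i++)
            sum += fabs (times[i] - *median);

        *adev = sum / n;
    }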

> most interesting for benchmarking and optimization however is the minimum
> time a specific operation takes, since in machine execution there is a hard
> lower limit we're interested in optimizing. and apart from performance
> clock skews, there'll never be minimum time measurement anomalies which
> we wanted to ignore.

This is a good point and something I should add to cairo-perf. Changes
to the minimum really should indicate interesting changes, so the
minimum should form a very stable basis for cairo-perf-diff to make its
decisions.

> >            My stuff uses a single-pixel XGetImage just before starting
> > or stopping the timer.
>
> why exactly is that a good idea (and better than say XSync())?
> does the X server implement logic like globally carrying out all
> pending/enqueued drawing commands before allowing any image capturing?

My most certain answer is "That's what keithp told me to use".

Less certainly, I am under the impression that yes, the X server will
never return a pixel value that could still be modified by requests it
has already received. I believe that
this is a protocol requirement, and as a result the XGetImage trick is
the only way to ensure that all drawing operations have completed.
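
For reference, the trick amounts to something like the following (a
sketch only, not the actual cairo-perf code, and the helper name is
made up), called immediately before reading the clock on either side
of the timed region:

    #include <X11/Xlib.h>
    #include <X11/Xutil.h>

    /* Read one pixel back from the target drawable.  Assuming the
     * protocol guarantee described above holds, the reply cannot be
     * sent before all previously queued drawing against the drawable
     * has been carried out, so returning from this call means
     * rendering is complete. */
    static void
    wait_for_rendering (Display *dpy, Drawable d)
    {
        XImage *image;

        image = XGetImage (dpy, d, 0, 0, 1, 1, AllPlanes, ZPixmap);
        if (image)
            XDestroyImage (image);
    }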

Keith can you confirm or deny the rationale for this approach? Is it
really "better" than XSync?

-Carl
