Re: Organising an ARM performance drive?



There is another class of profiling and performance work needed: you can learn a lot from xscope X protocol traces... For example, I noticed very poor Firefox performance popping up menus over a 60ms RTT network. Xscope showed a grab pointer, followed by three query pointers, followed by an ungrab. Doing stupid things fast doesn't help performance one bit....
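
To make the cost concrete, here is a minimal C/Xlib sketch of that kind of call sequence (not the actual Firefox code; the pattern is only illustrative). XGrabPointer and XQueryPointer each block waiting for a server reply, so at a 60ms RTT the four replies alone cost roughly a quarter of a second before the menu can appear:

    /* Illustrative sketch of the call pattern xscope revealed; each of
     * the grab and query calls below is a synchronous round trip to the
     * X server.  Build with: gcc roundtrips.c -lX11 */
    #include <X11/Xlib.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy)
            return 1;

        Window root = DefaultRootWindow(dpy);
        Window root_ret, child_ret;
        int root_x, root_y, win_x, win_y;
        unsigned int mask;

        /* Round trip 1: XGrabPointer waits for its status reply. */
        XGrabPointer(dpy, root, False, ButtonPressMask,
                     GrabModeAsync, GrabModeAsync, None, None, CurrentTime);

        /* Round trips 2-4: each XQueryPointer blocks on a reply. */
        for (int i = 0; i < 3; i++)
            XQueryPointer(dpy, root, &root_ret, &child_ret,
                          &root_x, &root_y, &win_x, &win_y, &mask);

        /* The ungrab has no reply; it is merely queued and flushed. */
        XUngrabPointer(dpy, CurrentTime);

        XCloseDisplay(dpy);
        return 0;
    }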

         - Jim




On Nov 16, 2009, at 9:04 AM, Loïc Minier <lool dooz org> wrote:

On Wed, Nov 04, 2009, Dave Neary wrote:
Recently during a couple of events I attended (OSiM World, the Maemo
Summit and ELC), a number of people have signalled to me that they are
interested in working together on a performance review of the GNOME
stack on ARM. Others have indicated that they're concerned about
performance of newer GNOME developments on ARM. This smells to me like
an ideal opportunity to collaborate.

We had similar discussions around Ubuntu on ARM, wondering whether it
could perform better, if it is slower than other ARM distros etc.

Some obvious opportunities for performance checking at the GNOME level
are GStreamer, Clutter, PulseAudio, GTK+, Pango and perhaps even Xorg.
Suggestions for the best ways to find specific performance issues, which
we can then go about getting fixed, are welcome.

So there are various things you can run, but I think it's important to
keep a goal in mind; for instance you can optimize for the speed of the
Python testsuite, or for VFP performance etc., but what you really care
about is how fast the device performs this or that task, or how much
power it uses overall.  Hence I think you first need to set some goals
before trying to "improve performance".  For instance you might want
the device to boot or to resume in a particular time, or you might want
the list of applications to show up in a particular amount of time or
the browser to start up or load web pages in a particular time.
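
One way to make such a goal measurable is to time the end-to-end task the user actually sees. A minimal sketch, assuming you only want wall-clock time from launching some command to its exit (a real browser-startup benchmark would instead wait for the window to map or the page to render):

    /* Minimal wall-clock timer for a user-visible task: fork/exec a
     * command given on the command line and report how long it takes
     * to exit. */
    #include <stdio.h>
    #include <sys/wait.h>
    #include <time.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s command [args...]\n", argv[0]);
            return 1;
        }

        struct timespec start, end;
        clock_gettime(CLOCK_MONOTONIC, &start);

        pid_t pid = fork();
        if (pid == 0) {
            execvp(argv[1], &argv[1]);
            _exit(127);              /* exec failed */
        }
        waitpid(pid, NULL, 0);

        clock_gettime(CLOCK_MONOTONIC, &end);
        double elapsed = (end.tv_sec - start.tv_sec)
                       + (end.tv_nsec - start.tv_nsec) / 1e9;
        printf("%s took %.3f s\n", argv[1], elapsed);
        return 0;
    }

Invoke it with whatever command and arguments you care about, and run it a few times (hot and cold cache) so the numbers are actually comparable.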

Once you have identified the key things you care about (e.g. video
decoding using as little CPU/power as possible, or browser startup
speed), I think the best approach is to use kernel infrastructure; I've
read about nice oprofile-based research, and I think it's an excellent
first step: it allows identifying which resources are constrained and
by which ELF binaries/libs.
  Then you should have a good breakdown of where the various costs come
from; it might be that Pango takes too long to init, or that setting up
the X client takes too long, or that there is too much I/O going on,
etc.  At this point you should investigate how to save time in the
biggest offenders and add debug info to them or use regular profilers
against them.
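
For the "add debug info" step, even a crude pair of monotonic timestamps around a suspect call will often confirm where the time goes before you reach for a full profiler. A sketch; suspect_library_init() is just a hypothetical stand-in for whatever the breakdown points at (Pango setup, X client init, a config parse, ...):

    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    static double now_ms(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * 1000.0 + ts.tv_nsec / 1e6;
    }

    /* Time any call; prints its source text and the elapsed wall-clock
     * time to stderr. */
    #define TIMED(call)                                                 \
        do {                                                            \
            double t0 = now_ms();                                       \
            call;                                                       \
            fprintf(stderr, "%s: %.2f ms\n", #call, now_ms() - t0);     \
        } while (0)

    /* Hypothetical stand-in for the suspect initialization work. */
    static void suspect_library_init(void)
    {
        usleep(50 * 1000);   /* pretend 50 ms of work */
    }

    int main(void)
    {
        TIMED(suspect_library_init());
        return 0;
    }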


The only thing this doesn't cover very well is the impact of toolchain
changes; one way to cover these is archive-wide rebuilds with different
flags.  I don't know of a better way to test things here.  For
instance, what's the impact if you build everything for Thumb-2, or
with hard float, etc.?
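
Short of an archive rebuild, one way to get a first indication is to build a small, float-heavy translation unit with each candidate flag set and time it on the device. A toy sketch, assuming a GCC ARMv7 cross toolchain (the flags in the comment are standard GCC ARM options; the kernel itself is only illustrative):

    /* Toy VFP kernel for comparing code built with different flag
     * sets, e.g.:
     *
     *   gcc -O2 -marm                             bench.c -o bench-arm
     *   gcc -O2 -mthumb                           bench.c -o bench-thumb2
     *   gcc -O2 -mfloat-abi=softfp -mfpu=vfpv3    bench.c -o bench-softfp
     *   gcc -O2 -mfloat-abi=hard   -mfpu=vfpv3    bench.c -o bench-hardfp
     *
     * Timing the same workload build-for-build isolates the toolchain
     * effect from everything else running on the device. */
    #include <stdio.h>

    int main(void)
    {
        volatile float acc = 0.0f;

        /* Simple float loop: heavy enough to show VFP/ABI differences,
         * small enough to inspect with objdump -d. */
        for (int i = 1; i < 50 * 1000 * 1000; i++)
            acc += 1.0f / (float)i;

        printf("acc = %f\n", acc);
        return 0;
    }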


  Bye
--
Loïc Minier
_______________________________________________
mobile-devel-list mailing list
mobile-devel-list gnome org
http://mail.gnome.org/mailman/listinfo/mobile-devel-list

