Re: Blocking the UI from CPU-intensive gdbmi parsing




Hello Jonner,

I am really sorry to reply to this email so late. I got stuck on other
stuff.

Jonathon Jongsma wrote:
> In general, nemiver is pretty good about not blocking UI while it is
> doing processing of various things (I/O, parsing, etc).  But sometimes
> where there is a lot of data to parse, the UI blocks quite badly.  For
> example, if you debug nemiver with itself, and then open the 'open file'
> dialog (Ctrl+O), the list of source files is very long, and parsing this
> file list takes quite a few seconds.

Man, this sucks. I remember we had this problem at some point; after
profiling we found some hot spots to hammer on, and we ended up reducing
the parsing time to something quite reasonable. I should try and profile
this again. Maybe some new parsing code introduced a performance
regression in this area. Who knows. Also, I wonder how many files we
have to parse for it to take that long.

> During this time, the dialog is
> completely frozen (and sometimes the dialog hasn't even gotten a chance
> to draw itself fully before it becomes frozen, so it is a frozen ghost
> dialog).  In experimenting with bug
> http://bugzilla.gnome.org/show_bug.cgi?id=564294, I was toying around
> with the idea of providing a pulsing progress bar while the file list is
> being populated, but since the UI is blocked, there's no way to pulse a
> progress bar.  Is there anything we can do to improve this situation? 

I am thinking out loud here.

I think we could, for instance, let IDebugger provide an interface that
would notify its user about the progress of some operation.
It could look like:

IDebuggerSafePtr debugger = get_debugger_from_somewhere ();

ISomeNotifierSafePtr notifier = debugger->get_some_progress_notifier ();

notifier->pulse_signal ().connect (&on_notification_pulse);

/*void on_notification_pulse (); is declared somewhere*/

For the particular case of file list parsing, the implementation of the
notifier type would emit a pulse event for each parsed file, or
something like that. Unfortunately, GDB/MI doesn't tell us how many
files it is going to send us, so we can't give percentage-of-completion
information. Still, I think we can say "hey, we are working, we aren't
just stuck".

The implementation of on_notification_pulse () can even run GTK+ main
loop iterations to keep the UI from freezing, if need be. That's
hackish, but it's simple to try and to debug.
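
As a sketch of that hack, assuming a gtkmm progress bar living in the
'open file' dialog (the s_progress_bar pointer below is made up), the
handler could look like:

#include <gtkmm.h>

static Gtk::ProgressBar *s_progress_bar = 0; // set by the dialog code

void
on_notification_pulse ()
{
    // Show some activity in the dialog.
    if (s_progress_bar)
        s_progress_bar->pulse ();

    // The hackish part: drain whatever events are pending so the dialog
    // keeps redrawing while IDebugger is busy parsing.
    while (Gtk::Main::events_pending ())
        Gtk::Main::iteration (false);
}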

> Can we farm out some of these long intensive parsing operations (e.g.
> GDBMIParser::parse_file_list()) out to worker threads?

I am not keen on doing multi-threaded stuff at this level. In theory,
threading could certainly help here. But in practice, multi-threaded
parsing interacting with a main loop can quickly become really
complicated and lead to instability if not crafted very, very carefully.
I'd try simpler things first, and if they really don't work I'd reach
for the multi-threading hammer.

> Can we break up the long parsing functions into smaller chunks
> and do them in idle callbacks?

This is not easy. The client sends multiple commands to GDB; let's say
commands C(0), C(1), and C(2). For each command C(i), the client expects
a reply R(i). R(0) must be sent back to the client before R(1), and R(1)
must be sent back before R(2). Before sending back each R(i), IDebugger
does some processing P(i) on the result of the command C(i).

So what happens is: a command C(0) is sent to GDB. It triggers a reply
from GDB. That reply is processed in P(0). As a result, a reply R(0) is
sent to the client.

So {C(0), P(0), R(0)}, {C(1), P(1), R(1)} must happen in that /order/.
The order is important here.
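
In other words, the replies have to be matched against a FIFO of
in-flight commands. This toy sketch (the command names are made up) just
illustrates why the P(i) and R(i) come out in the order the C(i) were
sent:

#include <iostream>
#include <queue>
#include <string>

int
main ()
{
    std::queue<std::string> in_flight;
    in_flight.push ("-file-list-exec-source-files"); // C(0)
    in_flight.push ("-stack-list-frames");           // C(1)

    // Each answer coming back from GDB belongs to the oldest command
    // still in flight, so processing and replies keep the same order.
    while (!in_flight.empty ()) {
        std::string command = in_flight.front ();
        in_flight.pop ();
        // P(i): parse the GDB/MI output for this command here ...
        std::cout << "R(i): reply for " << command << " sent to client\n";
    }
    return 0;
}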

What you are asking for, if I understand correctly, is to split each
P(i) into "small" chunks of processing that would be done during idle
time. I think that keeping the required ordering while scheduling the
sub-P(i) chunks for idle time could be quite hard. And I am not sure we
won't end up finding that the chunks are still not small enough and need
to be cut even more. Doing those trial-and-error cycles might not be
easy with such a setup. But I am not sure.
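
Just to be clear about what the idle-callback approach would look like,
here is a rough, self-contained sketch using Glib::signal_idle (). The
FileListChunkWorker class and the chunk size are made up, and it
conveniently assumes the raw file list is already in memory before the
chunked P(i) work starts:

#include <glibmm.h>
#include <algorithm>
#include <cstddef>
#include <vector>

class FileListChunkWorker {
    std::vector<Glib::ustring> m_files;
    std::size_t m_next;
    sigc::signal<void> m_done_signal;

public:
    FileListChunkWorker (const std::vector<Glib::ustring> &files) :
        m_files (files), m_next (0)
    {}

    sigc::signal<void>& done_signal () {return m_done_signal;}

    void start ()
    {
        Glib::signal_idle ().connect
            (sigc::mem_fun (*this, &FileListChunkWorker::process_some));
    }

    // Runs at idle time. Returning true keeps the idle handler alive;
    // returning false removes it once all the files are processed.
    bool process_some ()
    {
        const std::size_t CHUNK_SIZE = 50;
        std::size_t end = std::min (m_next + CHUNK_SIZE, m_files.size ());
        for (; m_next < end; ++m_next) {
            // ... per-file processing, i.e. a slice of P(i) ...
        }
        if (m_next >= m_files.size ()) {
            m_done_signal.emit (); // this is where R(i) would go out
            return false;
        }
        return true;
    }
};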

I think the problem you are raising is an important one that needs
thinking and testing. Thanks for bringing this up.

And please, do not let my "no no" discourage you. I tend to see problems
everywhere :) In any case, I am really interested in trying things to
fix this problem, once I am a bit more freed up from what I am doing at
the moment.

Cheers,

Dodji.

--
As long as your head hasn't been cut off, you can still hope to put on a hat.

