Re: [Evolution-hackers] oprofile & vfolders



On Tue, 2005-04-05 at 16:05 -0400, Lee Revell wrote:
On Mon, 2005-04-04 at 09:43 +0800, Not Zed wrote:
> 
> Might be some lock contention, but might not be, e.g. some hidden n^n
> algorithms or something.  Although that symbol is inaccurate, no
> g_async queues are used in the mail code at all.
> 

Are you sure the symbol is inaccurate?  The rest of them look plausible.
It appears to be something internal to the glib hash table
implementation.

glib hash tables don't use locks and definitely don't use async queues.  It is probably just a mis-guessed symbol from the profiler.

Here's the profile with all the symbols.  It looks like an expensive
manual scan of all the headers is being done, probably to sort the
message list.

I don't understand why evolution has to examine each header to generate
the "Unread Mail" vfolder, as it already has enough information about
them to sort the normal folders.

The unread vfolder (like other vfolders) doesn't update as changes occur; otherwise your messages would vanish from the list as soon as they were marked 'read'.  Instead it just keeps track of which folders have changed while the vfolder is open.  Then when it does a 'rebuild', it re-scans those folders for changes.  It does a refresh/rebuild when you switch to that folder.

I guess this is what is going on then, but it isn't normally terribly slow unless there is 'lock contention' happening.  That is when two threads go through similar loops:

loop:
  get a lock
  do a little bit of work
  release the lock

so you end up doing only a little bit of work for each context switch.  There are several cases where this could happen, e.g. one vfolder 'syncing' its contents whilst another one is 'scanning' it.  A gdb backtrace of the multiple threads would show this clearly and easily; a profiler won't, since it has no time-space relationship information.

Of course, the algorithms are (at least) linear, so many messages/large folders cause a slow-down anyway.

CPU: CPU with timer interrupt, speed 0 MHz (estimated)
Profiling through timer interrupt
samples  cum. samples  %        cum. %     image name               app name                 symbol name
40330    40330         58.4976  58.4976    libc-2.3.2.so            libc-2.3.2.so            __GI___strcasecmp
3363     43693          4.8779  63.3755    vmlinux                  vmlinux                  default_idle
1696     45389          2.4600  65.8355    libglib-2.0.so.0.600.3   libglib-2.0.so.0.600.3   IA__g_async_queue_unlock
1139     46528          1.6521  67.4876    libc-2.3.2.so            libc-2.3.2.so            __GI_strcoll
1014     47542          1.4708  68.9584    libglib-2.0.so.0.600.3   libglib-2.0.so.0.600.3   IA__g_string_truncate
986      48528          1.4302  70.3886    libcamel-1.2.so.0.0.0    libcamel-1.2.so.0.0.0    folder_scan_header
769      49297          1.1154  71.5040    libc-2.3.2.so            libc-2.3.2.so            _wordcopy_bwd_aligned
656      49953          0.9515  72.4555    libglib-2.0.so.0.600.3   libglib-2.0.so.0.600.3   IA__g_hook_unref
620      50573          0.8993  73.3548    libglib-2.0.so.0.600.3   libglib-2.0.so.0.600.3   IA__g_hash_table_find
576      51149          0.8355  74.1903    libglib-2.0.so.0.600.3   libglib-2.0.so.0.600.3   g_hash_table_foreach_remove_or_steal
574      51723          0.8326  75.0228    vmlinux                  vmlinux                  __copy_to_user_ll
555      52278          0.8050  75.8279    libc-2.3.2.so            libc-2.3.2.so            __strncasecmp
555      52833          0.8050  76.6329    libedataserver-1.2.so.4.0.0 libedataserver-1.2.so.4.0.0 e_sexp_term_eval
502      53335          0.7281  77.3610    libpthread-0.60.so       libpthread-0.60.so       __pthread_mutex_lock_internal
415      53750          0.6019  77.9630    libglib-2.0.so.0.600.3   libglib-2.0.so.0.600.3   IA__g_strsplit
402      54152          0.5831  78.5460    libc-2.3.2.so            libc-2.3.2.so            _int_malloc
382      54534          0.5541  79.1001    libc-2.3.2.so            libc-2.3.2.so            __memalign_internal
358      54892          0.5193  79.6194    libcamel-1.2.so.0.0.0    libcamel-1.2.so.0.0.0    header_append_mempool
315      55207          0.4569  80.0763    libcamel-1.2.so.0.0.0    libcamel-1.2.so.0.0.0    keys_cmp
283      55490          0.4105  80.4868    libc-2.3.2.so            libc-2.3.2.so            __realloc
282      55772          0.4090  80.8958    libc-2.3.2.so            libc-2.3.2.so            _int_realloc
275      56047          0.3989  81.2947    libc-2.3.2.so            libc-2.3.2.so            _wordcopy_fwd_aligned
258      56305          0.3742  81.6689    libedataserver-1.2.so.4.0.0 libedataserver-1.2.so.4.0.0 e_sexp_result_free
257      56562          0.3728  82.0417    libedataserver-1.2.so.4.0.0 libedataserver-1.2.so.4.0.0 e_memchunk_alloc
224      56786          0.3249  82.3666    libedataserver-1.2.so.4.0.0 libedataserver-1.2.so.4.0.0 e_mempool_alloc
224      57010          0.3249  82.6915    libglib-2.0.so.0.600.3   libglib-2.0.so.0.600.3   IA__g_hook_insert_before
217      57227          0.3148  83.0063    libcamel-provider-1.2.so.3.1.0 libcamel-provider-1.2.so.3.1.0 search_match_all
200      57427          0.2901  83.2963    libcamel-provider-1.2.so.3.1.0 libcamel-provider-1.2.so.3.1.0 camel_message_info_free
198      57625          0.2872  83.5835    vmlinux                  vmlinux                  ide_end_request
187      57812          0.2712  83.8548    libpthread-0.60.so       libpthread-0.60.so       __pthread_mutex_unlock_usercnt
180      57992          0.2611  84.1159    libedataserver-1.2.so.4.0.0 libedataserver-1.2.so.4.0.0 e_memchunk_alloc0
164      58156          0.2379  84.3537    libedataserver-1.2.so.4.0.0 libedataserver-1.2.so.4.0.0 e_sexp_result_new
160      58316          0.2321  84.5858    libglib-2.0.so.0.600.3   libglib-2.0.so.0.600.3   IA__g_string_assign
159      58475          0.2306  84.8164    libcamel-provider-1.2.so.3.1.0 libcamel-provider-1.2.so.3.1.0 camel_folder_search_search

> Since it takes so long, you'd be better off running it inside gdb, and
> then hitting ctrl-c while it's busy and seeing where it is.

I tried this and it was incredibly slow, I didn't have the patience for
it.  I'll try it again if I can't figure out what's going on by
profiling.

If you can't run gdb, then your system must be a bit under-resourced to start with ... I'm not surprised Evolution runs slowly then - it is a bit of a pig (not saying it can't be improved though).
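For reference, the ctrl-c technique suggested earlier doesn't need the whole session run under gdb; attaching to the already-running process is usually much less painful.  A sketch of such a session (the prompts and process name are illustrative):

```
$ gdb --pid "$(pidof evolution)"
# wait until it is busy/stuck, then press Ctrl-C in gdb
(gdb) thread apply all bt
(gdb) continue
```

If two threads show up blocked in pthread_mutex_lock from the loops described above, that's the contention case; the stacks name both callers directly.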



