Re: [Evolution] Evo memory use



On Tue, 2003-02-18 at 19:34, David Woodhouse wrote:
(oooh cute -- evo 1.2 found this even after evo 1.3 segfaulted while I
was typing it)

On Tue, 2003-02-18 at 07:30 in some random timezone that Evo doesn't
want to divulge, Not Zed wrote: 
On Fri, 2003-02-14 at 20:54, David Woodhouse wrote:
Note that when Evolution is separated from the IMAP server by a slow WAN
link, the mere fact that it thinks it's necessary to fetch the headers
for _every_ message even in the mailbox(es) you currently have open is a
bug far more severe than the memory usage.

Hmm, I'm not sure what you mean here.  It only fetches the headers that
are new.  Or if the server has changed the UIDVALIDITY field, in which
case we must refetch all headers (as per RFC 2060).

But RFC 2060 doesn't say you must fetch all headers. It says you must
discard the ones you have. You don't _need_ to know all the headers
unless you're doing local sorting, or unless the user is grabbing the
scrollbar and moving slowly down the folder index, making you actually
display them all.
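
(Concretely, the rule amounts to something like this -- a rough sketch
in C with made-up names, not camel's actual code:)

    /* Minimal sketch of the RFC 2060 UIDVALIDITY rule, with invented
     * names.  If the value the server reports differs from the cached
     * one, every cached UID is stale and must be discarded -- but
     * nothing in the RFC forces an immediate refetch of every header. */
    #include <stdint.h>

    struct folder_cache {
        uint32_t uidvalidity;   /* value saved from the last session */
        /* ... cached summaries keyed by UID ... */
    };

    /* hypothetical helper: drop every cached summary */
    void cache_discard_all (struct folder_cache *cache);

    void
    check_uidvalidity (struct folder_cache *cache, uint32_t server_value)
    {
        if (cache->uidvalidity != server_value) {
            /* UIDs no longer mean what they used to: discard them.
             * Headers can then be refetched lazily, on demand. */
            cache_discard_all (cache);
            cache->uidvalidity = server_value;
        }
    }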

And as below, the application doesn't know about 'new messages' until
they are actually available in the list.  So it can't make a long
scrollbar of empty nodes and load on demand.  It could with some API
changes to camel, but it wouldn't really work too well at reducing
latency, because the same mechanism would have to be used to build a
threaded list or do sorting, which would need all the values anyway.

But you know what I mean because you address it below -- maybe I didn't
make it clear that the two paragraphs were referring to the same thing;
sorry.

You know how many messages there are in the mailbox so you can make your
scrollbar scale correctly. You can fill in the 'unknown' entries in the 
list with some suitable placeholder text and replace that text with the
_actual_ message information if and when the relevant list entries are
actually displayed on-screen. You can even fetch the message headers for
the non-displayed messages in the background to populate your cache, as
long as there's no user-driven IMAP activity which should take
precedence -- but there's absolutely no excuse for making the user wait
while you download all the headers, before you even start to display the
list.
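
Something like this, in rough C with invented names (not the real
etable model API -- just the shape of the idea):

    /* Sketch of the placeholder scheme.  The scrollbar scales off the
     * known message count; each row shows placeholder text until its
     * header arrives from a low-priority background fetch. */

    struct row {
        int have_header;        /* has the header been fetched yet? */
        char *subject;          /* real subject once fetched, else NULL */
    };

    /* hypothetical helper: queue a background header fetch for row i,
     * at lower priority than user-driven IMAP commands */
    void request_background_fetch (int i);

    const char *
    row_text (struct row *rows, int i)
    {
        if (!rows[i].have_header) {
            request_background_fetch (i);
            return "...";       /* placeholder until the header lands */
        }
        return rows[i].subject;
    }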

Actually there are a couple of reasons :)

Useful information -- thanks. 

 - updating etable has been extremely slow, and very buggy.  We tried
some incremental updating for local mail, and that either caused etable
to crash a lot or, if we rebuilt the tree every now and then, it was
just way too slow (it really bogs down your system).

I was thinking mostly of the unsorted case (or server-sorted) where you
could populate the tree beforehand, and just replace the _text_ in the
elements as the headers arrive, and no actual update to the structure of
the tree is necessary. Is that case just as bad? 

It doesn't actually make much difference in the unsorted case; it's
still too slow :-/  Or was -- it's better in 1.4.

We can't build the tree until we have the headers anyway -- if we're
doing threaded mode, that is.

We can't do server-side sorting: the abstraction simply doesn't support
it, for a start, but above that, all sorting is handled by etable
anyway, unfortunately, so we'd gain nothing and lose some control over
it.

Presumably the etable could be fixed if dynamic updates were actually
_used_ -- given time/motivation/etc. of course. 

The main guy working on etable moved on to more interesting things, and
anyone working on it since has had enough trouble just understanding the
code well enough to fix minor bugs.  But like I said, 1.4 is
significantly faster already.

 - partly it's the abstraction layers used, although there is actually
no technical reason why the current abstraction can't be used in a
different way at a lower level to implement this behaviour.

 - there's still the problem of how to display mail arriving with
sorting on, which can make the display next to unusable while new
things get added.

TBH, I'd start with "don't" -- if you're doing local sorting, then just
do a full download of the headers beforehand, as we do now.

Yeah but ... again with etable doing the sorting, we don't even know if
the user has selected sorting ... damn super widgets that do everything
for you.  It just resorts itself every time tree changes require it, and
sometimes when they don't.

The main reason is the first, the code hasn't been fast enough, although
etable in 1.4 is looking significantly faster so it may become
possible.  We might even get enough bugs out of it to incrementally
build the tree rather than having to discard and rebuild from scratch
every time, which would make it easy.

Nothing is as slow as my 64K ISDN line. I've watched it chunter away to
itself for ten minutes at a time downloading headers I didn't want to
see, even after I've changed my mind and selected a different folder --
surely the etable can't be _that_ bad? :)

Some servers don't handle our requests particularly efficiently,
apparently.  We ask for all headers except the Received ones (around a
20-30% reduction in traffic).  Up until a month ago I had a 56K modem
to develop with, which barely broke 33K6 ... which beats ISDN any day :)
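
(For the record, the request looks something like this -- paraphrased,
not a verbatim trace, and the exact attribute list varies:)

    C: a42 UID FETCH 1:* (UID FLAGS RFC822.SIZE BODY.PEEK[HEADER.FIELDS.NOT (Received)])
    S: * 1 FETCH (UID 100 FLAGS (\Seen) RFC822.SIZE 4251 BODY[HEADER.FIELDS.NOT (Received)] {342}
    S: ... all the headers except the Received: lines ...
    S: )
    S: a42 OK UID FETCH completed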

If you're filtering new messages, it may actually download the messages
too, although that is done afterwards in another thread, from memory, 
and so gets multiplexed better over the connection.

Seriously -- sometimes I fire up PINE to read my mail while I'm waiting
for it :)

The second issue is more work to fix, although not impossible.  We just
need to throw out 'folder updated' events during long processing, in the
low-level imap engine.

Why so? If we're incrementally building the tree why does it matter if
the folder changes?

That's the best mechanism the application has for finding out that new
messages are available from the mail library.  Otherwise it has to poll
the message count, which is ugly.

Update events can include any combination of added/changed/removed uid
lists.
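
Roughly like this, simplified (not the exact camel declaration):

    /* Simplified sketch of the 'folder updated' event payload, along
     * the lines of camel's change info.  Any combination of the three
     * lists may be non-empty in a given event. */
    #include <glib.h>

    typedef struct {
        GPtrArray *uid_added;    /* UIDs of newly arrived messages */
        GPtrArray *uid_changed;  /* UIDs whose flags etc. changed */
        GPtrArray *uid_removed;  /* UIDs that were expunged */
    } FolderChangeInfo;

    /* The UI listens for the event and applies each list
     * incrementally, instead of polling the message count. */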

So the idea is ...

Get, say, 10 headers at a time, pop them into a change list as 'added'
messages, fire that event off for the UI to add/process, then keep
going, etc.  Although you need to pipeline the requests, otherwise you
end up with even worse performance on high-latency links.  And the
current code can't really be made to work like that without a lot of
work.
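
In rough C, glossing over the pipelining, that loop would look something
like this (helper names invented, reusing the change-info sketch above):

    /* Sketch of the batched scheme: fetch headers in small chunks and
     * emit a 'folder updated' event per chunk so the UI can grow the
     * list incrementally.  A real version would keep several FETCHes
     * in flight to hide round trips on high-latency links. */
    #include <glib.h>

    #define BATCH_SIZE 10

    /* the change-info type as sketched earlier */
    typedef struct _FolderChangeInfo FolderChangeInfo;

    /* hypothetical helpers */
    FolderChangeInfo *fetch_header_batch (int first, int last); /* issue FETCH, parse replies */
    void emit_folder_changed (FolderChangeInfo *changes);       /* fire the event to the UI */

    void
    fetch_new_headers (int first, int last)
    {
        int i;

        for (i = first; i <= last; i += BATCH_SIZE) {
            int end = MIN (i + BATCH_SIZE - 1, last);

            /* the fetched UIDs go into the 'added' list */
            FolderChangeInfo *changes = fetch_header_batch (i, end);

            /* the UI appends these rows while we keep downloading */
            emit_folder_changed (changes);
        }
    }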

But there are other problems anyway; we want to rewrite the imap code
from scratch when we have the opportunity.  It's not a high priority
because of the resources required to complete it.




