Hi Peter,

On 12.02.17 20:01, Peter Bloomfield wrote:
> > Or am I completely wrong here?
> A high-latency connection would suffer -- think satellite.
Well, basically, we save a few TCP packets, and the remaining ones have a better fill level (payload vs. frame overhead). Comparing their size (and overhead) with the actual payload (i.e. the message streams), I must admit that I still wonder whether this is really visible to the user...
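To illustrate what pipelining buys us (a schematic sketch, not a real trace): without it, every RETR costs a full round trip, while RFC 2449 pipelining lets the client batch the commands, possibly into a single TCP segment:

    Without pipelining (one round trip per message):
        C: RETR 1
        S: +OK message follows ...
        C: RETR 2
        S: +OK message follows ...

    With pipelining (commands batched, replies in order):
        C: RETR 1
           RETR 2
        S: +OK message follows ...
           +OK message follows ...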
> On near-broadband, I can see a momentary pause between messages while the "RETR nn" command is sent, but I'd guess it's at the millisecond level.
This operation consists of (1) sending the query TCP packet (including all overhead), (2) the lookup of the message in the MDA (basically a "database query", whatever the implementation may be), (3) sending back the data, and (4) processing in Balsa. Depending upon your provider, step (2) may actually be the dominating part. You could run hping against the server to get a /very/ rough feeling for the network vs. MDA latency (and of course you should measure the time Balsa needs). If step (2) is the slowest part, and if the server processes the "next" item without waiting for the transmission of the previous one to finish, pipelining may actually speed up /fast/ connections. For low-bandwidth connections, the "database access" will be much faster than the data transmission anyway.
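If you want actual numbers instead of a feeling, a quick-and-dirty probe along these lines should do (a sketch only -- host, credentials and message number are placeholders, plain POP3 on port 110 without TLS, all error handling omitted). The connect time is roughly one network round trip, so the difference between the two printed values approximates the MDA lookup; compare the first value with what hping reports:

    /* Very rough latency probe, NOT Balsa code: times the TCP connect
     * (one network round trip) against a single RETR turnaround (one
     * round trip *plus* the MDA lookup). */
    #include <stdio.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>
    #include <netdb.h>
    #include <sys/socket.h>

    static double elapsed_ms(struct timespec a, struct timespec b)
    {
        return (b.tv_sec - a.tv_sec) * 1e3 + (b.tv_nsec - a.tv_nsec) / 1e6;
    }

    /* send one command, wait for the first chunk of the reply */
    static void send_cmd(int fd, const char *cmd, char *buf, size_t len)
    {
        write(fd, cmd, strlen(cmd));
        read(fd, buf, len - 1);
    }

    int main(void)
    {
        struct addrinfo hints = { .ai_socktype = SOCK_STREAM }, *res;
        struct timespec t0, t1;
        char buf[2048];

        getaddrinfo("pop.example.com", "110", &hints, &res);
        int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);

        clock_gettime(CLOCK_MONOTONIC, &t0);
        connect(fd, res->ai_addr, res->ai_addrlen);    /* 1 round trip */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        printf("connect (network RTT): %.1f ms\n", elapsed_ms(t0, t1));

        read(fd, buf, sizeof buf - 1);                 /* +OK greeting */
        send_cmd(fd, "USER jdoe\r\n", buf, sizeof buf);
        send_cmd(fd, "PASS secret\r\n", buf, sizeof buf);

        clock_gettime(CLOCK_MONOTONIC, &t0);
        send_cmd(fd, "RETR 1\r\n", buf, sizeof buf);   /* RTT + MDA lookup */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        printf("RETR first byte:       %.1f ms\n", elapsed_ms(t0, t1));

        send_cmd(fd, "QUIT\r\n", buf, sizeof buf);
        freeaddrinfo(res);
        close(fd);
        return 0;
    }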
> So, I feel that it's worth keeping pipelining capability, but we definitely need a fix for this broken server.
Just a "heretic" question - are we *really* sure the server is broken, of might there be a flaw in Balsa's implementation, which works with some servers, but doesn't with others? If other MUA's (Thunderbird, Apple Mail Lookout, ...) support pipelining (do they?), my feeling is that a bug in the server would have been noticed. OTOH, RFC 2449 does not look complicated. Did you completely trace the "broken" session? If it's not encrypted (o.k., it /should/ be!), you could even analyse it in Wireshark.
> It's actually quite easy to fix for this particular server: the IMPLEMENTATION is part of the CAPA response, so we can easily detect that we're talking to jpop-0.1 and ignore its PIPELINING capability. If other servers had the same problem, we could install a blacklist, but at this point it would have only a single entry, so hard-coding seems adequate. I'm thinking of adding PopHandle::does_not_pipe.
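For concreteness, I guess the check you have in mind would look roughly like this (a sketch only -- apart from the proposed does_not_pipe member, the struct layout, function and variable names are invented; TRUE/FALSE as in glib):

    /* Scan the CAPA response for the IMPLEMENTATION line, and disable
     * pipelining for the one server known to advertise it incorrectly. */
    static void
    pop_check_pipelining(PopHandle *pop, char *const *capa_lines)
    {
        pop->does_not_pipe = FALSE;
        for (; *capa_lines != NULL; capa_lines++) {
            if (strncmp(*capa_lines, "IMPLEMENTATION ", 15) == 0 &&
                strstr(*capa_lines, "jpop-0.1") != NULL) {
                /* advertises PIPELINING, but cannot actually handle it */
                pop->does_not_pipe = TRUE;
                break;
            }
        }
    }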
Blacklists/whitelists are *always* a bad solution IMHO: you end up spending most of your time maintaining those lists. And how would you get swift updates into distros?
> One alternative would be to add an "Enable pipelining" checkbox to the POP3 server dialog, but its purpose would be obscure, and it's not clear how a user would know to disable it, so I'm inclined against that solution.
I think this would be better. Just don't name the technical details of the implementation, but the intended purpose -- i.e. something like "Optimise for low-bandwidth connections if possible" (this was my intention when re-naming the crypto options in the SMTP dialogue).

Side note: I actually didn't implement SMTP pipelining (RFC 2920) in my net-client lib, basically due to the considerations above, for simpler error checking, and because the transmission is performed in the background. If you feel it would be beneficial, I will add it.

Cheers,

Albrecht.