Re: Slow SMB performance over GVFS



On Mon, 2013-10-07 at 20:44 +1100, Will Rouesnel wrote:
On 07/10/13 17:20, Ross Lagerwall wrote:
On Sun, Oct 06, 2013 at 05:39:45PM +1100, Will Rouesnel wrote:
Copying from Windows 7 to my Ubuntu 13.04 file server running ZFS, I can
achieve upwards of 80 MB/s transfer speeds, with a minimum of maybe 60
MB/s on a bad day (this is with a couple of large files).
Doing the same operation with GVFS, I'm lucky to clear 20 MB/s (27-28 is
the highest I've seen).
Doing the same operation with rsync -W and the GVFS mounts, I get no
more than 1 MB/s (same with dd).
Doing the same operation with rsync over ssh, I get ~50 MB/s.
If I run the same operation a few times (to account for a warmed cache),
rsync -W (i.e. no deltas) leaps to 100 MB/s (essentially line speed),
while GVFS over Samba chugs along at about 50 MB/s.
So rsync over ssh goes from 50MB/s to 100MB/s and GVFS over Samba goes
from 20MB/s to 100MB/s?
What happens if you use smbclient or a CIFS mount?
Sorry, my phrasing was unclear:
rsync -W over SSH manages a good 50 - 100 MB/s, no problem (accounting
for things getting cached on successive runs). This is still the top
performer - once the cache is warmed, it'll saturate the line.
GVFS over Samba manages 20-50 MB/s tops.
GVFS over SFTP manages 33 MB/s very consistently - probably limited by
the SSH layer, though based on the rsync result it should be
faster.

You need to check that via SFTP you are negotiating the same cipher;
the cipher selected when transferring data over SSH can make a big
difference to throughput.
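To rule the cipher in or out, you can list what your OpenSSH build offers and then pin one explicitly for a test run. This is just a sketch - the host and file names are placeholders, and -Q requires a reasonably recent OpenSSH client:

```shell
# List the ciphers this OpenSSH client supports (OpenSSH 6.3+).
ssh -Q cipher

# Force one cipher for an rsync run; repeat with different ciphers and
# compare throughput.  'bigfile' and 'user@server' are placeholders.
rsync -W --progress -e 'ssh -c aes128-ctr' bigfile user@server:/tmp/
```

If throughput changes significantly between ciphers, the SFTP numbers are CPU-bound in the crypto, not in GVFS itself.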

A CIFS mount with default settings behaves oddly: rsync -W manages about
50 MB/s, yet copying with Nemo cuts that down to 1-2 MB/s somehow.

I do not use CIFS via GVFS much, but I use WebDAV all day - one thing to
check is whether 'CIFS client + GVFS' is causing redundant calls for
properties [list, name, date created, etc...].  With WebDAV you see *at
least* two requests for all required information, which hurts
performance pretty badly.  Is 'CIFS client + GVFS' doing the same thing?
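One quick way to check, once you have captured gvfsd's debug output to a file, is to count repeated identical request lines - if a single directory listing asks for the same properties several times, that's the WebDAV pattern again. The log filename here is a placeholder; adjust it to wherever you saved the output:

```shell
# Count duplicate lines in a gvfsd debug log; any line that repeats is a
# candidate redundant property lookup.  'gvfsd-debug.log' is a
# placeholder for your captured daemon output.
awk '{ seen[$0]++ } END { for (l in seen) if (seen[l] > 1) print seen[l], l }' \
    gvfsd-debug.log | sort -rn | head
```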

You can run GVFS in debug mode with something like - 

killall -15 /usr/lib/gvfs/gvfsd \
 /usr/lib/gvfs/gvfs-udisks2-volume-monitor \
 /usr/lib/gvfs/gvfs-afc-volume-monitor \
 /usr/lib/gvfs/gvfs-gphoto2-volume-monitor \
 /usr/lib/gvfs/gvfsd-trash \
 /usr/lib/gvfs/gvfsd-burn \
 /usr/lib/gvfs/gvfsd-metadata \
 /usr/lib/gvfs/gvfsd-http \
 /usr/lib/gvfs/gvfsd-dav
GVFS_DEBUG=1 GVFS_HTTP_DEBUG=all  /usr/lib/gvfs/gvfsd -r

That whacks the running GVFS and starts a new one on the terminal.
There may be GVFS_ environment vars specific to the CIFS backend.
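I don't know offhand which variables the smb backend honours, but you can scrape the binary for candidate names. This only pattern-matches strings, so treat any hit as a lead rather than documented behaviour; the path follows the same layout as the killall list above:

```shell
# Pull GVFS_-prefixed strings out of the smb backend binary; these are
# likely (but not guaranteed) environment variables it reads.
grep -ao 'GVFS_[A-Z_]\+' /usr/lib/gvfs/gvfsd-smb | sort -u
```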

Running GVFS like this makes it much clearer what it is doing.

Using dd if=file | pv -B 10485760 | dd of=mount on the CIFS mount seemed
to give the best performance with CIFS - 70-80 MB/s, bursting up to 100
quite easily. The big buffer size in pv seemed to help a lot, since
performance wasn't great with a smaller buffer.

That would be operations on a single file with little to no directory
walking - which matches the performance issues I see with WebDAV.
Saving ["PUT" in WebDAV] is fast; any operation involving folders /
properties is very slow.

Looking at the numbers, there might just be an issue with Samba -> Samba
communication on my network, since CIFS and GVFS seem to hit about the
same maximum throughput with their own commands. Though why Windows ->
Linux can manage much higher speeds is still a mystery.
I guess the big problem is still that block-level reads are so slow on
GVFS, since my initial usage and frustration was pointing a block-level
app at a GVFS share and getting only 7-8 MB/s, whereas from my hard
drive I got 25-30 MB/s (in this case, ripping a Blu-ray).

-- 
Adam Tauno Williams <mailto:awilliam whitemice org> GPG D95ED383
Systems Administrator, Python Developer, LPI / NCLA


