Re: Problems transferring files over shell link (ssh).

On Mon, 5 May 2003, Pavel Machek wrote:

dd on the remote side fails to read the whole block of data from ssh.
That's why the block size was reduced in version 4.6.0, but it's still not
enough.  Reducing the block size to 1 would cure the problem, but it would
make the transfer very slow.  Since dd reads less data than it should, the
rest of the data goes to the subshell, which usually exits.
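The short read described above is easy to reproduce without ssh at all: a
read() on a pipe returns whatever happens to be buffered, which can be far
less than the requested block size.  A minimal Python sketch (not tied to
mc or dd, just illustrating the kernel behavior):

```python
import os

# Create a pipe and put only 10 bytes into it.
r, w = os.pipe()
os.write(w, b"0123456789")

# Ask for a 4096-byte "block".  The kernel returns what is currently
# buffered, not the full amount -- exactly the short read that makes
# dd stop early and lets the rest of the stream reach the subshell.
data = os.read(r, 4096)
print(len(data))  # 10, not 4096

os.close(w)
os.close(r)
```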

> Perhaps a bug should be filed in Debian?

This is left as an exercise for Debian users.  I normally don't report
problems in software I don't use myself.

I've just checked the source of fileutils-4.1, and dd doesn't try to read
the rest of the block if only part of the block has been read.  safe_read()
merely protects against reading nothing at all if the read() call is
interrupted by a signal (EINTR); it does not retry to fill the block.

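A retry loop that would actually fill the block — the loop that safe_read()
stops short of — looks roughly like this (a hypothetical full_read() for
illustration, not the fileutils code):

```python
import os

def full_read(fd, count):
    """Keep calling read() until `count` bytes arrive or EOF is hit.
    This is the loop dd would need in order to tolerate short reads."""
    chunks = []
    remaining = count
    while remaining > 0:
        chunk = os.read(fd, remaining)
        if not chunk:          # EOF: return whatever we collected
            break
        chunks.append(chunk)
        remaining -= len(chunk)
    return b"".join(chunks)

# The pipe may deliver the data in pieces; full_read() keeps reading
# until it has the requested count or the writer closes its end.
r, w = os.pipe()
os.write(w, b"hello ")
os.write(w, b"world")
os.close(w)
result = full_read(r, 11)
print(result)  # b'hello world'
os.close(r)
```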
I understand that the kernel always tries to fill the whole buffer, but in
this case we are dealing with ssh running dd as a child, possibly in a
pseudoterminal (I'm not sure about that).  One cannot expect complete
blocks from a program that is meant to provide low latency for users over
potentially slow network connections.

> We should use bs=1; data going to the shell by mistake seems extremely
> dangerous to me.

Maybe.  It's still faster to do one system call per byte than to pipe the
data through an extra decoder (e.g. a program that decodes bytes from a
hex representation).

Actually, dd is a bad choice for the remote command because it ignores
incomplete reads, but there is no other widespread utility that would read
a certain amount of data and then stop.  I hope I'm missing something, but
I asked on the mc-devel mailing list and nobody could come up with anything
better than dd.

> head -c ?

I'm concerned about its portability.  Also, some versions may stop on
binary zeroes.  "head" is meant for text processing after all.

Pavel Roskin
