Re: Archive signatures versus message digests



On Wed, 2011-11-16 at 20:34 +0100, Olav Vitters wrote:
> On Wed, Nov 16, 2011 at 06:49:39PM +0100, Guido Trentalancia wrote:
> > On Wed, 2011-11-16 at 17:50 +0100, Olav Vitters wrote:
> > > On Wed, Nov 16, 2011 at 04:59:55PM +0100, Guido Trentalancia wrote:
> > > > ftp.gnome.org is currently distributing GNOME sources without a proper
> > > > signature.
> > > 
> > > That is just our primary mirror. This is handled by master.gnome.org.
> > 
> > I cannot reach such a host (master.gnome.org) either by FTP or HTTP...
> 
> Correct. We only provide files through our primary mirror.
> 
> > Therefore I don't know what you are talking about. I suppose it must be
> > a host that the mirrors use for syncing through some sort of private
> > protocol or whatever. This is well beside the interest of the end-user.
> > I am the end-user, not a webmaster or someone dealing with the GNOME
> > server infrastructure...
> 
> Like most mirrors, just normal rsync.
> 
> > > > A good piece of advice I would give you is to get rid of the various
> > > > message digests such as MD5 and SHA and start using OpenPGP signatures
> > > > (they can easily be created by scripts using free software such as
> > > > http://www.gnupg.org/download). Ideally the secret key should be kept on
> > > > a machine different from the distributing server. Ideally such a machine
> > > > should not be connected to the Internet at all and should only be used
> > > > to sign the packages and upload them to the distributing server along
> > > > with the detached signatures.
> > > 
> > > The SHA256 is to ensure integrity, not security.
> > 
> > Yes, I've used the term "security" improperly because I wrote the
> > message in a rush: "integrity" was meant in place of "security".
> 
> Ok. Made me misjudge a lot.
> 
> > > Not sure why you suggest not connecting it to the internet, while still
> > > having it upload stuff.
> > 
> > Once again, do not take my terms too strictly. Upload might also refer
> > to transferring data through removable external data storage devices, or
> > to transferring data through a connection on an internal private LAN
> > interface isolated from the Internet (as in the opposite of a DMZ, ever
> > heard of that)...
> 
> I don't see any of them as practical.
> 
> Removable external data storage: only a few people have physical
> access to the machines, and these people are not involved with GNOME.
> Waiting for them to go to the machines (they go every 3 months or so)
> would make GNOME releases practically impossible, while the benefit is
> zero.
> 
> Internal private LAN: so nobody can access it, and no tarballs get
> released.
> 
> > But again, we are really going beyond the scope of my message here. I am
> > just an end-user, suggesting the use of digital signatures instead of the
> > almost useless message digests provided by the same server.
> 
> An insecure gpg signature is worse than SHA256.
> 
> > > What I thought was having a HSM. The goal would be to validate that the
> > > tarball downloaded from ftp.gnome.org is the same as the file on
> > > master.gnome.org.
> > 
> > Validate when? Upon retrieval? What if the attack takes place on one
> > of the mirrors immediately after retrieval (and validation) from
> > master??!?
> > 
> > OpenPGP signatures are so easy to use and they are both standard
> > practice and good practice...
> 
> You said earlier you didn't care about this possible attack vector?
> 
> > > Anything else seems pointless. If master.gnome.org is compromised, it
> > > seems logical you can also create the signature.
> > > 
> > > > Message digests used the way you are using them are completely useless
> > > > in my opinion (and that of many others).
> > > 
> > > I don't care how many people agree or not agree about something.
> > 
> > It's best practice before being standard practice.
> 
> Give me a good argument, not that many people agree or "best practice".
> I decide on merits.
> 
> > > It is to check integrity, nothing more.
> > 
> > Integrity against random errors during data transfer across the Internet
> > is already checked by TCP/IP. Used that way, message digests are not a
> > valid measure against integrity problems due to intentional security
> > breaches.
> 
> I used SHA256 to verify that the file on my USB key was corrupt. The USB
> key was corrupting the files written to it. I also once had bad
> memory.
> 
> SHA256 is provided to check integrity, nothing more. If people think it
> is there to provide security, then they're misguided.
> 
> > > > If the ftp/web server is compromised (take for example the recent attack
> > > > on kernel.org), an attacker would be able to replace BOTH the file AND
> > > > the message digests. If the attack is carried out by a man in the
> > > > middle, then a compromised archive could be sent/injected along with a
> > > > compromised message digest.
> > > 
> > > If you compromise the machine where you upload the tarball, it does not
> > > matter if you compromise the HSM or another machine or not. You can
> > > still have it change the signature.
> > > 
> > > > Message digests located at the same server provide no security benefit
> > > > at all to the end user (they might perhaps provide a little benefit when
> > > > located at a different server for the first kind of attack depicted
> > > > above, I suppose).
> > > 
> > > Your assumption that they're for security is incorrect.
> > 
> > So what are they for? And how are you going to provide integrity
> > assurance against intentional security breaches?
> 
> We don't provide that.
> 
> > > > Is there any specific reason for not using OpenPGP signatures (which is,
> > > > amongst other things, standard practice)?
> > > 
> > > We don't have a HSM in the machine. Other than that, there is already a
> > > bug about adding a HSM, just not implemented.
> > 
> > I take it you mean Hardware Security Module by HSM. But I am not following
> > you much on this topic (see above). You don't necessarily need any
> > additional hardware to create digital signatures...
> 
> You're not interested in details, but ignoring details would result in
> a false sense of security.
> 
> At the moment we only have the SHA256 and *no* guarantee at all that the
> files are secure. If we were to sign tarballs and introduce a bug whereby
> any random person is able to upload tarballs which then get signed, that
> would be worse than the current state. Meaning: it has to be secure, not
> just available.
> 
> > > https://bugzilla.gnome.org/show_bug.cgi?id=645565
> > > 
> > > > See for example ftp.gnu.org... They are already using this scheme!
> > > > They've always been using this scheme!
> > 
> > > Not high on the priority list as IMO it requires a HSM; plus, I think a
> > > compromise of the machine which uploads stuff will still result in
> > > signed tarballs; lastly, nobody ever seems to use the SHA256 hashes (some
> > > are incorrect). Would still be nice.
> > 
> > If the machine which "uploads stuff" is not connected to the Internet
> > (see above, for example) and is handled and configured properly, I
> > cannot see how it could be exposed to security breaches.
> 
> You assume it does not have to be connected. The machine has to be
> connected to the Internet. How else would maintainers upload anything?
> Having a few machines in between which cannot get compromised is nice.
> But there has to be a point where a maintainer can upload his tarball.
> If that is compromised, it does not matter if the rest is still 100%
> secure. The tarball will be uploaded+signed.
> 
> > In any case, you can always refer to gnu.org for further clarification
> > on how they are actually implementing the scheme in practice.
> 
> I'm already aware of how Fedora does it. They use a HSM.

I believe I have already given you something of a reply about my opinion
on the use of an HSM. But because I had the impression that you do not
trust my opinion much, and since, strictly speaking, I am not an expert in
cryptography, I shall quote an excerpt from "Network Security Essentials:
Applications and Standards", 3rd edition, by William Stallings (published
by Pearson Education Inc. - Copyright 2007 - Italian edition):

Capitolo 3 - Funzione hash unidirezionale

[...]

- La cifratura software è piuttosto lenta. [...]
- La cifratura hardware ha costi non trascurabili. [...]

Because I only own the Italian version of the book, the best I can
provide is an unauthorised and rather liberal translation of the above
back into English:

Chapter 3 - One-way hash functions

[...]

- Software encryption is rather slow. [...]
- Hardware encryption has non-negligible costs. [...]

Omitted parts indicated as "[...]" are not relevant to this discussion.
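
For completeness, since much of this thread is about what the published
SHA256 digest actually gives the downloader, here is a minimal sketch in
Python of the only check it enables (the file name and the expected
digest are supplied by the user; nothing here is specific to GNOME's
setup). It detects accidental corruption, but it says nothing about who
produced the file or the digest:

import hashlib
import sys

def sha256_of(path, chunk_size=1 << 20):
    # Compute the SHA-256 digest of a file, reading it in chunks so that
    # large tarballs do not have to fit in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # Usage: python check_sha256.py <tarball> <published-sha256>
    path, published = sys.argv[1], sys.argv[2].lower()
    actual = sha256_of(path)
    if actual == published:
        # Integrity only: the bytes match the digest. If the digest came
        # from the same (possibly compromised) server as the tarball,
        # this proves nothing about authenticity.
        print("OK: SHA256 matches the published value")
    else:
        print("MISMATCH: expected %s, got %s" % (published, actual))
        sys.exit(1)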

Also consider that Red Hat, being a supplier of systems to the US
government, might have legal obligations towards it to use NSA-certified,
or at least NIST-certified, cryptographic equipment instead of uncertified
open-source software such as gpg (www.gnupg.de). I had proposed gpg to you
as an initial, affordable solution fit for the purposes of many home
users, provided that gpg itself is secure and that the algorithms being
used are secure enough.
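
To make concrete what I mean by signatures being "easily created by
scripts", here is a minimal sketch, again in Python, of the signing and
verification steps using gpg. The file name and the key identifier are
hypothetical placeholders; the signing part assumes the secret key is in
the keyring of the (ideally offline) signing machine, and the verification
part assumes the end-user has already imported the corresponding public
key through some independent channel:

import subprocess

# Hypothetical placeholders, for illustration only.
TARBALL = "gnome-shell-3.2.1.tar.xz"
SIGNATURE = TARBALL + ".asc"
SIGNING_KEY = "release@example.org"   # key ID or e-mail of the release key

def sign(tarball, signature, key):
    # Run on the signing machine: create a detached, ASCII-armoured
    # OpenPGP signature next to the tarball.
    subprocess.check_call(
        ["gpg", "--batch", "--yes", "--local-user", key,
         "--armor", "--output", signature, "--detach-sign", tarball])

def verify(tarball, signature):
    # Run by the end-user: returns True only if the signature is valid
    # and was made by a key present in the user's keyring.
    return subprocess.call(["gpg", "--verify", signature, tarball]) == 0

if __name__ == "__main__":
    sign(TARBALL, SIGNATURE, SIGNING_KEY)
    print("good signature" if verify(TARBALL, SIGNATURE) else "BAD signature")

The detached .asc file would then be uploaded alongside the tarball, so
that the signature can be checked regardless of which mirror the archive
was downloaded from.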

But if you have really never heard of anything like this before, then a
good introductory article for the general public is the following one:

http://www.bbc.co.uk/news/uk-england-gloucestershire-11475101

Of course, other algorithms can be devised if those provided at no cost
by gpg do not suit your taste, or if you can prove that they are faulty
or too weak.

Kind regards,

Mr Guido Trentalancia


