Re: More desktop security thoughts (was Re: GNOME privilege library)

On Fri, 2005-01-14 at 14:38 +0000, Mike Hearn wrote:
> On Thu, 13 Jan 2005 20:31:50 -0500, Havoc Pennington wrote:
> > I don't think it's that complicated. Generally speaking just getting the
> > "what normal users are allowed to do" line right solves the problem.
> Right, it's not all that complicated. My main point is that in a home
> setup, "normal users" are often on their own and they shouldn't have to
> know a root password in addition to their personal one. But unless you
> relax UNIX security almost entirely at some point that *will* happen, and
> at that point as you never told the user what the root password is they're
> stuck.
> So I don't see any scalable way to get this right without pretty much
> disabling DAC security.
> > The only time end users need the root password is when we have a
> > technology bug that makes capabilities insufficiently fine-grained to
> > properly say "users can do X, but not Y"
> > 
> > Think about it: why should anyone need two passwords? The system should
> > know what each user is allowed to do based on the user's role. The user
> > shouldn't have to explicitly change "who they are," that's just
> > annoying.
> Right, and hard to explain and insecure etc etc
> > So anytime you have to auth as root it's pretty much a bug as far as I'm
> > concerned. But just setting all uids to 0 is equally dumb because it
> > gives you a lot of capabilities you don't need which leads to broken
> > systems and malware.
> Are you sure they're not needed? Eg being able to install software
> basically implies being able to arbitrarily modify nearly any system
> setting. Oh sure you can say "only RPM can install software", but 3rd
> party ISVs like game developers will still produce Loki Setups and the
> like so you'll end up with software dumped in $HOME. And it's not any more
> secure against malware because it's trivial to synthesise RPMs at runtime
> then "install" them to get any effect you like. In other words, you might
> as well just let any user scribble over /usr at will.

This is wrong.  The problem is that you are assuming all users are
ignorant and thus punishing the ones that aren't.  If all users are
effectively running as UID 0 then any malware that gets installed (say,
by a child grabbing something off the net, or a bug in your browser that
allows code injection) also has UID 0.

If you separate the privileges and require that gaining the "install
software" privilege means going through a particular utility, like a
single software installer, you can put checks in one place and know they
can't be circumvented.  For example, sure, malware could craft an RPM on
the fly, but can it sign it with a key you trust?
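As a minimal sketch of that single choke point: assume the installer
shells out to `rpm --checksig` before escalating, and that the OK / NOT
OK line it prints looks roughly like current rpm output (the exact
wording varies between rpm versions, so the parser here is an
assumption):

```python
import re
import subprocess


def signature_ok(checksig_line):
    """Parse one line of `rpm --checksig` output.

    Accept only if a *signature* (not just digests) verified OK.
    The line format is an assumption; rpm versions differ.
    """
    line = checksig_line.strip().upper()
    if "NOT OK" in line:
        return False
    has_signature = bool(re.search(r"SIGNATURES|GPG|PGP", line))
    return has_signature and line.endswith("OK")


def install_if_trusted(pkg_path):
    """Hand the package to the privileged install path only if rpm can
    verify its signature against a key already in the system keyring."""
    result = subprocess.run(["rpm", "--checksig", pkg_path],
                            capture_output=True, text=True)
    if result.returncode != 0 or not signature_ok(result.stdout):
        return False  # unsigned or untrusted: refuse to escalate
    # ... invoke the privileged installer here ...
    return True
```

A synthesised, unsigned RPM fails at this one gate no matter how it was
produced, which is exactly the point: the check lives in the privileged
path, not in the user's good judgment.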

Yes, most users don't understand signing.  Most users probably just
click "Install."  However, as soon as you remove that check, the users
who *are* experienced enough to, at the very least, know not to install
from untrusted sources will have no way to protect themselves.
You now pretty much say, "your system is permanently insecure because
your neighbor is too ignorant."  That's wrong.

And yes, there is a real difference between installing stuff in /usr
and installing it in $HOME when you have several accounts.
If my girlfriend somehow gets a trojan installed into her account it
will only affect her account and not my files - imagine if my
(relatively) computer-inexperienced girlfriend's actions could infect
the source code repositories I have access to from my account!

The UI part of this is irrelevant at this point.  I don't care if you
decide to prompt for a root password, the user's password, just use a
confirmation dialog, or whatever.  There needs to be a barrier between
the user and the system.  If you ship "Red Hat Linux Home Edition" that
barrier could be invisible *by default*, but a knowledgeable user who
installs it (or gets it pre-installed on their Dell or whatever) should
be able to go into the System Security Preferences and enable the
confirmations/prompts if they feel they are smart enough to make
effective use of them.
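To sketch that idea (the policy key names here are made up; a real
desktop would keep this in GConf, PolicyKit, or similar), the barrier
always exists and the preference only controls whether it is visible:

```python
# Hypothetical policy store; a real desktop would use GConf/PolicyKit etc.
HOME_EDITION_DEFAULTS = {"confirm_install": False}   # invisible barrier


def needs_confirmation(action, policy):
    """The privilege check itself always runs; the policy only decides
    whether the user sees a prompt.  Unknown actions fail safe and
    prompt anyway."""
    return policy.get("confirm_" + action, True)
```

Flipping `confirm_install` to True in a System Security Preferences
dialog is then all the knowledgeable user has to do.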

> Here's another example: Linux only lets root processes raise their thread
> priorities. That makes sense on a server but it's an appcompat problem on
> the desktop because Win32 apps sometimes assume they can do this and break
> in subtle ways if they can't. So now Lucy has to run Dungeon Siege via
> sudo or whatever.
> How many other examples are there of random stuff like beeping the
> speaker which need root today? I don't know. But probably lots. 
> Relaxing DAC security and effectively replacing it with MAC may sound
> scary and broken, but that's just because we're all coming from a UNIX
> background. We're so ingrained in the idea that *users* shouldn't have
> certain privileges that anything else seems weird. The end result should
> in theory be more secure than before, because instead of relying on not
> running as root app authors can actually assign their program only the
> privs it needs.

Slightly off-topic, but my only "fear" with replacing DAC with MAC is
that the only MAC system for Linux with any real traction is SELinux,
and even the SELinux gurus seem to have trouble getting a general-use
system effectively locked down with it.  It's too complicated to
configure and use.

To borrow from the GNOME design paradigm: the configuration and
management of SELinux is tied to the implementation details rather than
driven top-down by "what does the administrator/OS-developer want to
do?"  There's no way I'll ever really trust a system that's that
insanely complex to configure. </rant>

> > To solve viruses, the right approach is to limit that installer program
> > somehow (enforce signatures? only allow GUI usage, no scripting?)
> So, I think there are two types of program we want to protect against:
> (1) Viruses/worms which spread automatically by exploiting flaws in the 
>     system construction (buffer overflows etc)
> (2) Spyware/trojans/BackOrifice/Phatbot-style apps and so on which
>     generally spread manually via social engineering
> Some use both.
> Solving two is really hard, and boils down to providing the user with
> enough information to make a good decisions. That was the theory behind
> SSL security - it's automatic but if the math doesn't work out give the
> user a big warning explaining what's wrong. Effectively it was a
> distributed whitelist for secure websites.
> And for a few years it worked quite well. The big mistake SSL made was
> that getting onto that whitelist was far too hard. The big CAs (a) charged
> lots of money and (b) were generally Evil anyway. So now most secure
> websites I visit show me this dumb warning because Red Hat or navi or
> whatever just generated an Apache snakeoil cert. It's not so bad for
> regular users as most e-commerce sites still suck it up and get a regular
> certificate, but SSL warning fatigue definitely got worse lately.

This could be "fixed" by simply changing the warnings to errors: if a
remote site doesn't have a valid certificate, don't give the user an
"ignore" option.  That must, of course, be accompanied by an easy way
for admins to install system-wide certificates (instead of having to
install them in each individual application) and an easier way to get
verifiable certificates in the first place.  ( looks like a good
start, although they have a long way to go to work the bugs out of the
management and get things settled and working smoothly.)
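In Python's standard `ssl` module, for instance, "warnings become
errors" is just the default verification left switched on, and the
system-wide part is one `load_verify_locations()` call; the
certificate-fetching helper below is only a sketch:

```python
import socket
import ssl


def make_strict_context(extra_ca_file=None):
    """A context that hard-fails on a bad certificate: no 'ignore' option.
    Chain and hostname failures raise ssl.SSLCertVerificationError."""
    ctx = ssl.create_default_context()      # loads the system CA store
    ctx.verify_mode = ssl.CERT_REQUIRED     # the default, made explicit
    ctx.check_hostname = True
    if extra_ca_file:                       # admin-installed, system-wide CA
        ctx.load_verify_locations(cafile=extra_ca_file)
    return ctx


def fetch_peer_cert(host, port=443):
    """Connect and return the peer certificate, or raise on any
    verification failure -- the user is never asked to click through."""
    with socket.create_connection((host, port), timeout=10) as sock:
        ctx = make_strict_context()
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()
```

The application never gets a chance to downgrade the error into a
clickable warning, which is the whole point.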

Basically, be pro-active about security.  When your choices are a) make
it simple for the user or b) make sure their credit card isn't stolen,
you really need to go for b.  Easy is worthless if all you can do is
easily get screwed.  Computers exist to help people, not fool people
into thinking they're being helped.  Remember, we're talking about "Ease
of use" and not "ease of misuse."
