
List:       firewalls-gc
Subject:    Re: multilevel security in firewalls
From:       Marcus J Ranum <mjr () iwi ! com>
Date:       1995-07-31 13:41:06


Ray Kaplan writes:
>The rigor of a
>trusted system design can (and is regularly) destroyed by misapplication,
>improper operation, and slip-shod management.  This certainly includes
>firewalls based on this technology.

	[I'm going to do an evil thing here and completely slide by
the rest of Ray's well-written and thought-out mail, because I think
the paragraph above cuts directly to the heart of the matter.
I'm not picking on Ray - but I'm going to use the paragraph above
to explain and illustrate why the multilevel security philosophy has
not caught on, and never will, unless it's reformulated and remarketed.]

	The rigor of trusted system design is a market disaster and will
never succeed. Talking to the trust engineers is like talking to
a Freudian psychologist: the logic isn't entirely circular, but if you've
bought into it, then it's inescapable.

	The view Ray presents above - "trusted systems are great, but are
just not being used right" - is, in my view, a complete cop-out. The reason
trusted systems are not being used right is that, as written, they are
UNUSABLE. Only someone who is forced to use them would
even consider touching them!

How has this happened? Well, it's a number of things:
	1) Technology moves too fast for formal, dogmatic paradigms
	2) Market-driven forces will not wait for formal methods
	3) Time to market is everything - vendors throw everything
		overboard (especially security!) to get it out the
		door on time

	What's happened is that in order to meet the insanely complex
design criteria of trusted systems, vendors have to design systems that
are obsolete before they even go into evaluation. For example, by the
time a vendor has rewritten their X server for CMW, it's 2 revs out
of date, obsolete, buggy, and lacking support for the latest device
drivers. In a technological market like the one we're in, if your
product cycle runs longer than 8 months, you are TOAST.

	At this point the trusted system mavens usually raise their
hands and say, "Time to market isn't everything! I'd rather have
security." The problem is that most of their users would rather have
Windows 95, Photoshop, the latest version of MS Word, 4.4BSD, etc.
Look at the evaluated systems out there: they are all obsolete and
you can hardly run anything interesting on them. So the mission critical
systems get built on foundations of sand (no security) because the
secure systems suck too much to contemplate using. Give most users
a choice between CompuServe and DOCKMASTER and see which wins.

	The reason trusted systems are mis-deployed is because they
are terrible for real world use.

	Trusted system guys have to stop telling the people who are
trying to get real work done "nononono! you're using it WRONG!" and
should spend their time trying to make trusted systems that are easy
to use, with the security features completely hidden from the user.

	But don't bother - it's too late. The installed base of
insecure systems and practices is too large to be replaced and
the demon of backwards compatibility rides all our backs. There
might have been a time when secure computing could have become
the norm but now it's too little, too late. I've had the pleasure
of addressing the Association Of Computer Security Greybeards and
when I've said this sort of thing their reaction is one of horror.
"Trustworthy computing is almost a reality! Don't throw the baby
out with the bathwater!"  -- the sad fact is that it's been 10 years
of effort and all that's come of it is obsolete software that is
5 years behind the market curve. The baby never even got close
to the bath.

>The only thing I'd add is that the "make them be treated differently" be
>stiffened to something like "provably force them to be treated according
>to the security policy."

	Nope. Forget proofs. Come on. The proof guys have been
ploughing that field for years and have come up empty. The reality
is that proofs don't scale well with complexity, and in case you
haven't noticed, every release of every program is 10% larger and
more complex than the previous one. The proofniks have had their turn
and it's been a dead loss.
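	As a back-of-the-envelope check on how badly that kind of growth
outruns proof effort (a sketch only - the 10%-per-release figure is the
rough estimate above, not measured data):

```python
import math

# Rough arithmetic on the "10% larger per release" estimate above.
# If every release multiplies code size by 1.10, growth compounds:
growth = 1.10

# Releases needed for the codebase to double in size:
releases_to_double = math.log(2) / math.log(growth)
print(f"doubles every {releases_to_double:.1f} releases")  # ~7.3

# Size relative to the original, 20 releases later:
print(f"{growth ** 20:.1f}x after 20 releases")  # ~6.7x
```

	At a release or two a year, the system you proved correct is a
fraction of the system you're actually running within a decade.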

>>Why aren't more people doing
>>this? Because of compatibility and installed base and support
>>issues. I've seen grown men start to cry when they even THINK about
>>running a labelled network...
>Indeed.  However, those who are serious about this do it - painful as it
>is.  Yoda (Star Wars Jedi Master) was right:  "There is only do or not do,
>there is no try." 

	More trusted system philosophy: "if you don't use trusted
systems you are clearly not concerned about security."  That's nonsense.

	Only people with a lot of money and a lot of time can afford to
bother and they usually do their important computing (where the work
REALLY gets done!) on PCs at home! Sometimes trusted-system thinking
reminds me of those guys who'd rather ride a Harley Davidson hardtail
than anything else in the world. "Sure it's slow, corners like a hippo,
brakes like a banana on a greased cookie tray, drips oil, and sounds
like a trainwreck - BUT IT'S A HARLEY"  --  "Sure, it's Version 6 UNIX
with no TCP/IP and no windows and it's slower than mud and I can
only run it on hardware that is slower than my toaster oven's clock
chip but it's A1!"

	I've been working with a lot of people lately, and I haven't
run into anyone in the commercial space IN MY ENTIRE CAREER
who has actually deployed trusted system technology. Perhaps
you have, but from my viewpoint it looks like a total rout.

	It's not that people are not serious about trusted systems,
it's that trusted system designers aren't serious about producing
usable systems.  [Actually, they are, they just haven't succeeded.]

>        1) The attempt to invent a new evaluation criteria (in the form of
>        a remake of the U.S. Trusted Computer System Evaluation Criteria
>        (TCSEC) into The Federal Criteria) seems to have failed - the Orange
>        book looks like the status quo for a while.

	Yep. The attempt appears (from here) to have been to write
an envelope criterion that could be stretched to cover ANYTHING so
that people could actually get what they want to use in the door. It
failed, but I think that was mostly because the documentation was so
big and arcane that nobody except the authors had time to read it all. :(

>        2) It looks like the attempt to have a nice compromise in the form
>        of the mix of security features found in the Compartmented Mode
>        Workstation (CMW) has fallen into disfavor.

	That was an interesting effort. From where I stand, the CMW
effort was an internal revolution against trusted systems, playing
within the rules. If you look at some of the things in CMWs they
were anathema to the hardcore trust engineers. My take on it was that
the users were Sick and Tired of having workstations with no windows.
CMW seems to have been an elaborate maskirovka to get NFS and X-windows
into DOD computing.

	I suspect it's failed because of time to market. Even the vendors
have got to hate having to maintain stuff that's 2 revs out of date
because of the evaluation.

>Smail is great. However, if
>you are REALLY going to wall off mail, it takes trusted technology that
>actually implements a security policy to control the upgrade/down grade
>issues between compartments/levels.

	The SMG (if that's what you're referring to) is a case in
point. Give me one of the current SMGs and I can configure it to
run TCP/IP over Email, and do NFS into and out of a classified
environment. I believe this little loophole is being fixed, but the
whole problem is one of those "emperor's new clothes" type deals.
If you allow ANY large amounts of data in or out, I can run IP
over it. Period. All you can do is make it slow and expensive. The
long and short of the story is that it's a wasted effort. If the
data needs to be absolutely secure: isolate it.
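	To make the tunneling claim concrete, here's a minimal sketch
(purely illustrative - the function names and message fields are mine,
and this is not the SMG's actual interface). Any gateway that passes
bulk mail can be made to carry arbitrary packets:

```python
import base64
from email.message import EmailMessage
from email import message_from_bytes, policy

# Hypothetical illustration: wrap an arbitrary "packet" (just bytes
# here) in an innocuous-looking mail message, then recover it on the
# far side. This is the classic covert tunnel: if bulk data passes,
# IP can ride on it.

def encapsulate(packet: bytes) -> bytes:
    msg = EmailMessage()
    msg["Subject"] = "weekly report"  # looks like ordinary traffic
    msg.set_content(base64.b64encode(packet).decode("ascii"))
    return bytes(msg)

def decapsulate(raw: bytes) -> bytes:
    msg = message_from_bytes(raw, policy=policy.default)
    return base64.b64decode(msg.get_content())

# Stand-in bytes for an IP header plus payload:
packet = b"\x45\x00\x00\x28" + b"payload-bytes"
assert decapsulate(encapsulate(packet)) == packet
```

	A mail guard can rate-limit or inspect this, but it can't
distinguish it from legitimate bulk traffic in general - which is why
all you can do is make the tunnel slow and expensive.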

>I propose that this is because the "99% of the things that
>people want to do" are not given due consideration in the light of a
>rigorous risk analysis.

	Of course not! It's usually considered in terms of time to
market and productivity gains.

> As examples, consider that object linking and MIME
>are wonderful new things that everyone wants to do.  However, the lack of
>security in the current designs that are seeing widespread implementation
>is profound.  That is, no one stopped to do a risk analysis.  Had they done
>so, I believe that the rigor of using a trusted system to enforce a
>security policy that was targeted at reducing these risks would have become
>common place by now.

	Hell no!

	Run trusted systems just so I could do MIME? No WAY! I'll
just do MIME and bash some stuff into the interpreter to make it a
little better and ride the tiger. That's what 99% of the people out
there will do.

	Rigor is nice but every time rigor gets put up against
technological progress, it loses. :(

	I'm not saying to trash rigor, but if you're going to be
doing in-depth risk analysis all the time, you're going to have
to make it fast or you'll get left behind. I've seen too many cases
where an organization has been thinking real hard about a firewall
and found out that while they were thinking the guys in the research
lab put in a T1 line. :(

mjr.
