Firewall Wizards mailing list archives

RE: Transitive Trust: 40 million credit cards hack'd


From: "Marcus J. Ranum" <mjr () ranum com>
Date: Sun, 19 Jun 2005 14:46:57 -0400

Brian Loe wrote:
> Applying that to some of the systems I have been charged with administering
> (and all thought on this subject is new to me - how unfortunate, eh?), they
> considered all systems required to talk to it as trustworthy.

Right; that's one of the big mistakes in distributed computing. :( If you
are dealing with sensitive information, the system that's viewing or
accessing the sensitive information must be trustworthy. I'll give you
a fun example of this. Many many moons ago, I did some work for a
large financial processing company that moves and holds lots and
lots of people's money. At one point, I met with all the different security
guys for all the different parts of the enterprise, and a few of the security
guys from some of their partners. As the situation evolved I discovered
that:
        - Business partners would telnet in to the mainframe (through
                the firewall) and post significant transactions
        - The mainframe administrators' view was that "security was a
                handled problem" because they ran RACF and thus
                everything was OK
        - The business partners felt that any error in their account
                would be the service provider's fault, since it was their
                mainframe
        - The service provider's mainframe guys felt that any error in
                a transaction was (by definition) the end users' fault,
                since guarding their password was their responsibility

You can see how this all ended already. The mainframe guys refused
to recognize that their security depended on the end users' networks
and platform security. The business partners refused to recognize that
if they had a problem with their firewall and one of their accounts got
compromised, it would be trivial for a hacker to post transactions
with that account - and the mainframe guys were gleefully announcing
that if that happened they'd refuse to roll the transaction back because
it wasn't their problem.

The orange/red book guys and the trusted systems guys have
understood this forever. Getting this stuff right is why multi-level
computing basically never happened; data had to be labelled
and could never move down the classification hierarchy without
a bunch of angels dancing on the head of a pin, first. If you
migrate those ideas to modern "distributed computing" the
equivalent would be to say:
        "Your desktop system is not going to be allowed to make
        queries of our mainframe until it satisfies our security requirements
        first. AND if it's going to retain any of that data (in cache, in free
        disk blocks, or a local database) it has to remain appropriately
        secured for the duration."

How many of you could tell your customers *that*?! People still scream
and whine over the idea of putting firewalls in - so attempting
to enforce a local policy against a business partner? That's patently
ridiculous. Right? Well, technically it's NOT ridiculous, but everyone
has basically blown it off.
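
To make the labelling rule concrete, here is a minimal sketch (in Python,
with made-up level names - not any particular trusted system's
implementation) of the Bell-LaPadula style check: a subject may read only
at or below its clearance, and may write only at or above its level, so
labelled data never moves down the hierarchy.

# Illustrative sketch of the label rule described above: Bell-LaPadula
# style "no read up, no write down". Level names are made up; real MLS
# systems also use compartments, not just a linear hierarchy.

LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

def can_read(subject_clearance, object_label):
    # Simple security property: read only at or below your own clearance.
    return LEVELS[subject_clearance] >= LEVELS[object_label]

def can_write(subject_clearance, object_label):
    # *-property: write only at or above your own level, so labelled data
    # never moves down the classification hierarchy.
    return LEVELS[subject_clearance] <= LEVELS[object_label]

print(can_read("SECRET", "CONFIDENTIAL"))    # True: reading down is fine
print(can_write("SECRET", "UNCLASSIFIED"))   # False: no write down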

> Various systems REQUIRED a certain level of access to do the job, so it was
> given. This trustworthiness is static. If something changed on the trustworthy
> system, the trusting system had no way of knowing about it and therefore
> never re-evaluated its trustworthiness.

Right. That's a huge problem, as you can imagine. If your desktop
accesses my mainframe and its customer database, I *should*
assume that pieces of my customer database may still be cached
on your desktop. So I *should* ask for guarantees that they're not, and
that you can't move them transitively from there.

Again - that's not gonna happen in the "real world" - which is why
we're going to continue to see massive data exposures as spyware
and malware converge. Think of all of the chumps who are
accessing critical logistical systems and payroll systems from
the same computers they use to surf the web. The same computers,
80% of which have spyware on them.

> Aren't there already models out there that fix this? That place a stage of
> authentication and verification between each, or every other, transaction?

The models that "fix" this entail a great leap away from what we're
doing today. They entail either multi-level desktops (don't go there!)
or they entail networks that are segregated by trust. Multi-level
wouldn't even work across entities (even if it worked at all), because
"highly secure" for you doesn't map reliably to "highly secure" for me
and we'd have to standardize across organizations.

I can still envision an environment in which one organization might
require: "In order to connect to our system you need to be on a network
that is not connected to any other network, and none of your machines
are allowed to move into or out of that network." I'm sure someone
will correct me if it's no longer true, but I think SWIFT terminals used
to work that way. Automatic teller machine networks used to work
that way and no longer do - which is why the Korean automatic
teller machines (and some US ones) went down from SQL Slammer.

I don't think I'm telling tales outside of school here, but the recent
attacks on the supercomputer grid were transitive-trust based. A
user account at some research center got compromised - probably
a password stolen via a user logging in to his home system using
SSH on a machine with a keylogger. The attacker got onto the
researcher's home system, exploited a vulnerability, and backdoored
sshd. A while later he had an administrator's password, and
backdoored sshd all over that research cluster, along with all the
SSH clients. Pretty soon he had account/password pairs at other
research facilities, as the researchers used SSH to extend
their trust boundary. Etc. We used to call this "island hopping"
but it's also a pure transitive trust exploitation - if facility A
trusts the users and software at facility B, then facility A will
eventually fall prey to any successful attacks mounted on facility
B, even if only because screen-scrapers get installed at facility
B to harvest facility A's sensitive information.
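
If you want to see why "transitive" is the right word, here is a minimal
sketch (made-up facility names, nothing to do with the real sites) that
treats trust as a directed graph and a compromise as reachability over
it: everything that directly or transitively trusts the compromised node
eventually falls.

# Made-up facility names; the point is that a compromise propagates along
# the reverse of the trust edges - i.e. it is a transitive closure.

from collections import deque

# trust["A"] = {"B"} means facility A trusts users/software at facility B.
trust = {
    "facility_A": {"facility_B"},
    "facility_B": {"researcher_home"},
    "facility_C": {"facility_A"},
    "researcher_home": set(),
}

def eventually_compromised(first_victim, trust):
    # Everything that (directly or transitively) trusts a compromised
    # node has to be treated as compromised too.
    fallen = {first_victim}
    queue = deque([first_victim])
    while queue:
        victim = queue.popleft()
        for node, trusted in trust.items():
            if victim in trusted and node not in fallen:
                fallen.add(node)
                queue.append(node)
    return fallen

# One keylogged home machine takes the whole chain with it:
print(eventually_compromised("researcher_home", trust))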

If you worry about this enough, you'll realize that there are ultimately
two ways to address it:
        - build multilevel secure computing systems (don't go there!)
        - say "f*** it"
Most of the industry has chosen the second option, but didn't even
bother to think about it. :)

mjr. 

_______________________________________________
firewall-wizards mailing list
firewall-wizards () honor icsalabs com
http://honor.icsalabs.com/mailman/listinfo/firewall-wizards

