Dailydave mailing list archives

Re: Re: Hacking's American as Apple Cider


From: "Marcus J. Ranum" <mjr () ranum com>
Date: Tue, 20 Sep 2005 19:38:45 -0400

pageexec () freemail hu wrote:
> i thought it was pretty obvious as we have an analogous situation with
> cryptography. and you are not advocating a worldwide ban on public
> crypto research and development, are you?

Like most analogies, it fails if you stretch it far enough. The reason
for making cryptographic designs public is that it has been
determined that it's fruitless to try to keep them secret. So it's
easier to start off with them public, because that lowers the barrier
for an expert to get involved in analyzing them. Note that this
is a fairly recent philosophy; a lot of secrecy used to surround
algorithms, too.

But the important difference between crypto research and hacking
is that it's rare that a new crypto discovery is going to suddenly
make a vast number of users become vulnerable. For example,
when differential cryptanalysis of DES was revealed (there are
indications that it was known about when the algorithm was
developed) it's not as if DES suddenly stopped working, or every
piece of DES-protected data suddenly became a portal into
the host it was running on. The downstream effects of publishing
cryptographic research are not quite as sudden as the downstream
effects of publishing hacking "research."

And, lastly, you neglect to mention that most of the interesting
code-breaks in history were kept secret by the discoverers for
quite some time. Indeed, most of the interesting research in
code-making was kept secret by the discoverers for quite some
time. You still don't know how Type-1 crypto works, and if you
do, you'll just smile and nod. :)

> in both hacking and crypto
> we're finding and exposing flaws in someone's thinking (or lack thereof,
> as is often the case), and i don't see why that'd be the dumbest
> idea. unless you want to live in a dumb world, that is.

Your analogy fails because you're talking about popular myths,
not reality. Yes, there is an active community that is doing public
cryptographic research and they do it openly. They are just
a pale shadow of the classified research that goes on and 
has been going on for a lot longer.

I actually hope that your analogy is NOT good - that there are no
deeply classified hacking research groups inside various national
intelligence services. Rumors float about such things, but my guess
is we won't know one way or the other for a long time.

When you're designing a system that is going to come under
attack you're going to design it to withstand currently known
attack paradigms, reasonable extensions of those current
attack paradigms, and speculative new attack paradigms.
Or you're not doing your job. So, what does that mean?

- Cryptosystems:
        -> make sure it resists brute force attack
        -> throw in a fudge factor on brute force (e.g.: all the processors in the world
                shouldn't be able to break it in under 10,000,000 years; see the
                sketch after this list)
        -> make sure it resists differential cryptanalysis
        -> think about extensions to differential cryptanalysis and see if they
                might apply (e.g.: think about whether your algorithm is a group
                or whatever)
        -> make sure it resists timing attacks on key guessing
        -> think about extensions of timing attacks and see if they
                might apply (e.g.: what other kinds of information might leak
                as a result of variance in processor speed)
        ...etc
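
To put rough numbers on that fudge factor, here's a quick back-of-the-envelope
sketch in Python. Every figure in it (processor count, trial rate, the key
sizes tried) is an assumption invented for illustration, not anything from the
checklist above:

SECONDS_PER_YEAR = 365 * 24 * 3600

def years_to_exhaust(key_bits, attackers, trials_per_second):
    """Years needed to try every key, given a number of attacking
    processors and a per-processor trial rate."""
    keyspace = 2 ** key_bits
    trials_per_year = attackers * trials_per_second * SECONDS_PER_YEAR
    return keyspace / trials_per_year

# Assume (generously) ten billion processors, each testing a
# billion keys per second.
attackers = 10 * 10**9
rate = 10**9

for bits in (56, 128, 256):
    print(f"{bits}-bit key: ~{years_to_exhaust(bits, attackers, rate):.3g} years")

Run it and a 56-bit keyspace evaporates in a fraction of a second of that
imaginary effort, while 128 and 256 bits clear the 10,000,000-year margin
comfortably - which is the whole point of designing the margin in up front.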

- Bank Safes
        -> Make sure it resists brute force attack:
                a) explosive
                b) thermal - hot
                c) thermal - cold
                d) thermal hot/cold
                e) chemical (oxidizer, corrosive)... etc
        -> Make sure the lock will resist brute force attack:
                a) large enough keyspace
        -> Make sure it can be attached to the floor
        -> Make sure the floor can be attached to the foundation
        -> etc.

- Castle walls
        -> Make sure the wall material and thickness resist the current state of attacks:
                a) projectile
                b) shaped charge
                c) tunneling
        -> Make sure the foundations are deep enough to prevent undermining
        -> etc.

OK, I won't bore you with fleshing these out ad nauseam. But an expert
castle-builder is going to understand the parameters for what is needed to
build a strong castle. And, yes, technologies change. For example, there
are all kinds of nice brick Civil War-era forts that were designed to withstand
smoothbore cannon for months but would be battered to bits by rifled cannon
in days. The reason they would still last days (instead of minutes) is the
engineering overhead built into the assumptions about wall thickness.

Transformative shifts in attack paradigm may cause catastrophic failures.
But they are few and far between. Incremental improvements in attacks
should be within the engineering overhead of good design. The same
applies to crypto and to other security systems. So, if you have a
system that was designed well by someone who thought through
the attack paradigms of the day, then testing it destructively is not
going to make sense.

You can, and should, unit-test components of a top-down design.
For example, when the Golden Gate Bridge was constructed, there
was no technology on earth that could destructively test the main
cables used to support the spans. By design, not even the bridge
itself would stress them enough, because they had an engineering
overhead of something like 25 times the weight they were expected
to support. So what do you do? You note the assumptions behind
the portions of the system that ARE testable and test those portions
as part of your overall construction. In the case of the Golden Gate
they sample-tested many of the cable strands before they were bundled
together into the main cables. That's how real engineering is done.
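
For concreteness, here's a minimal sketch (in Python) of that
sample-and-extrapolate style of testing. The strand count, strand strengths,
and load figure are all invented for illustration - they are not the bridge's
actual engineering data - but the shape of the reasoning is the same:
bench-test what you can, and derive the margin for what you can't.

import random
import statistics

# Unit-test the components you CAN test, then derive confidence in
# the assembly you cannot test destructively.  All numbers invented.
random.seed(1)

strand_count = 27_000                                   # strands per cable (assumed)
sample = [random.gauss(6.4, 0.2) for _ in range(200)]   # bench-tested strand strength, tons

# Use a conservative lower bound (mean minus three standard deviations)
# rather than the average strand strength.
lower_bound = statistics.mean(sample) - 3 * statistics.stdev(sample)
cable_capacity = lower_bound * strand_count

expected_load = 6_000                                   # tons the cable must carry (assumed)
print(f"estimated cable capacity: {cable_capacity:,.0f} tons")
print(f"engineering overhead: {cable_capacity / expected_load:.0f}x the expected load")

With these made-up numbers the margin comes out in the same ballpark as the
25x mentioned above, but the number matters less than the discipline of
deriving it from components you actually tested.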

Take that example into a firewall or a web server. If your design
is that the httpd implementation is chrooted into a locked-down environment,
then you verify that until you're confident that it is. How? There are
several possible tests: kill the process so it leaves a core file, or
look at the process's root inode in a kernel debugger, or whatever.
This is how real engineering is done. It's a disciplined process.
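
On a current Linux system there's an even lower-effort spot check than a core
file or a kernel debugger: compare the process's root directory against the
real root via /proc. That's a different mechanism from the ones mentioned
above, and the sketch below is just that - a sketch, with a made-up pidfile
path - but it illustrates the "verify the design assumption, don't just trust
it" step:

import os

def looks_chrooted(pid):
    """Rough check: a chrooted process's root directory will have a
    different inode/device than the real filesystem root.  Linux-only
    (/proc), needs sufficient privilege, and is no proof against a
    process that has already broken out - it's a sanity check."""
    proc_root = os.stat(f"/proc/{pid}/root")
    real_root = os.stat("/")
    return (proc_root.st_ino, proc_root.st_dev) != (real_root.st_ino, real_root.st_dev)

# Hypothetical: read the worker pid from a pidfile and check it.
with open("/var/run/httpd.pid") as f:       # path is an assumption
    pid = int(f.read().strip())

if looks_chrooted(pid):
    print("httpd appears to be chrooted, as designed")
else:
    print("httpd is NOT chrooted - design assumption violated")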

Many people have commented on my observation that airplane
builders eschew "penetrate and patch" - yet whenever they
lose a plane, the NTSB investigates the causes and they rework
the design for that problem. But that's not "penetrate and patch"
so much as identifying new failure paradigms and
dealing with them. Penetrate and patch is where you have
beginning C programmers write critical parts of your code,
ship it, and then let Litchfield find the bugs, then turn the
bugs over to another beginner C programmer to write a patch,
and repeat the process indefinitely. Real engineering is when
you design your system to resist all the currently known
failure paradigms - and you insert checks in hopes of detecting
the unfortunate situation in which you're the involuntary "discoverer"
of a new one.
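
As a trivial sketch of what "insert checks" can look like in code (the
invariant, the exit code, and the function names are all mine, not anything
from the above): verify a design assumption at run time and fail loudly when
it doesn't hold, so a new failure paradigm announces itself instead of being
quietly absorbed.

import logging
import sys

log = logging.getLogger("invariants")

def require(condition, message):
    """Design-assumption check.  If this ever fires, we've just become
    the involuntary discoverer of a failure paradigm the design did
    not anticipate - so say so loudly and stop."""
    if not condition:
        log.critical("ASSUMPTION VIOLATED: %s", message)
        sys.exit(70)

def handle_request(path):
    # Design assumption: the dispatcher only hands us normalized,
    # absolute paths.  Check it rather than trusting it.
    require(path.startswith("/") and ".." not in path,
            f"unnormalized path {path!r}")
    return f"serving {path}"

if __name__ == "__main__":
    logging.basicConfig(level=logging.WARNING)
    print(handle_request("/index.html"))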

In airplane land I've heard this described as follows:
New Pilot: "OK, I've got it. So I should always follow THE BOOK?"
Old Pilot: "That is correct. ALWAYS follow what THE BOOK says."
New Pilot: "Well, what if something goes wrong that's not in THE BOOK?"
Old Pilot: "That's highly unlikely."
New Pilot: "Yeah, well, what if it DOES?"
Old Pilot: "Then do whatever you think is best, and try to survive. And if you
        survive we'll add what you did to THE BOOK."


> an interesting consequence of your opinion is that unless you want to
> admit to having practiced this dumbest idea yourself, you cannot know
> what hacking is. so how can you have an opinion on it?

I didn't say I have never practiced dumb ideas!! That article was the
distillation of many, many years of dumbness - on my part and on
the part of others. I'm hoping that by sharing mistakes, many of which
are my own, I can help you avoid making them yourself.

> on the 'default permit' issue: it is not the dumbest idea, it is the
> only way that can scale in systems. take a (not exactly big by any
> measure) company with 1000 users and 1000 executable files that these
> users need. that's an access control matrix with a million elements.

You're assuming an enterprise with 1000 users, each of whom has an
individual load-out of 1000 executables? That'd be dumber than dumb.
To address exactly the example you're raising, a lot of
organizations have a "common desktop environment," etc. Of course
a lot of users hate that. I often can't sleep at night because I am
crying for them.
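
To put numbers on why the common-desktop approach scales, here's a tiny
sketch. The load-out names and executable counts are invented; the point is
that a default-deny policy expressed as user-to-loadout and
loadout-to-executable mappings stays a couple of orders of magnitude smaller
than a per-user, per-executable matrix:

users = 1_000
executables = 1_000

# Per-user, per-executable rules: one entry per pair.
per_user_entries = users * executables                 # 1,000,000

# A handful of standard load-outs ("common desktop environments"),
# with each user assigned to exactly one.  Counts are invented.
loadouts = {
    "desktop": 300,      # executables in the common desktop image
    "developer": 500,
    "finance": 350,
}
loadout_entries = sum(loadouts.values())               # loadout -> executable
assignment_entries = users                             # user -> loadout

print("per-user matrix entries:", per_user_entries)
print("loadout-based entries:  ", loadout_entries + assignment_entries)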

mjr. 

