Dailydave mailing list archives

Re: Re: Hacking's American as Apple Cider


From: Jason Syversen <jason.syversen () gmail com>
Date: Tue, 20 Sep 2005 22:17:14 -0400

Marcus (and those few real security practitioners who identify with this 
perspective),

I wasn't going to comment on this line of discussion, as I thought the answer 
was rather obvious to everyone else on the list, but I couldn't resist after 
this latest email. I can't believe that someone who's been around as long as 
you, Marcus, doesn't recognize that the previous author's analogy DOES hold up, and that 
there really is unpublished security research going on. Dave, for example, 
hasn't published what he did when he was a government researcher. Neither 
has Jamie Butler. Or the laundry list of other smart people who used to or 
still work at those interesting places. Or their peers in China. What do you 
think they were doing there exactly, configuring firewalls? Writing Snort 
scripts?

The analogy to cryptography fits perfectly. They didn't USE to publish 
algorithms... and yes, the catastrophic analogy works just fine. You don't 
see that as often now because most protocols are reviewed extensively 
before they are standardized. But are you familiar with any major commercial 
software packages that have worldwide security peer review before 
"publication"? And if you're looking for catastrophic breaks in unreviewed 
crypto, try GSM for an example. Or better still, WEP, which was a lot like 
that open source code that nobody looks at until it's ubiquitous. Those 
were pretty substantial breaks... or look at the hundreds of other examples 
throughout history, and still today, that have failed catastrophically. 

This email is long enough already, but I have a great example of that... my company 
uses a 2-phase shift cipher to encrypt user passwords on our corporate LAN. 
These passwords have a lot of value to an attacker. Not reviewed by anyone. 
I figured it out first by analyzing the encrypted password (cryptanalysis, in 
crypto-speak). Then I went through the code and found the algorithm (and 
the fixed shift sequence) in the JavaScript source. (Kind of like "hacking", 
in computer-speak). What's the difference? Same vulnerability, different 
methodologies... is cryptanalysis "not cool" either?
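
To make that concrete, here's a toy sketch of the kind of weakness I mean. 
I'm not reproducing our actual scheme or shift sequence, so the cipher and 
values below are hypothetical stand-ins:

    # Hypothetical stand-in for a fixed-shift password "encryption" scheme;
    # the real algorithm and shift sequence are NOT reproduced here.
    SHIFTS = [3, 7]  # a fixed, repeating two-phase shift sequence (assumed)

    def encrypt(password):
        # Shift each character's code point by the repeating shift sequence.
        return "".join(chr(ord(c) + SHIFTS[i % len(SHIFTS)])
                       for i, c in enumerate(password))

    def recover_shifts(plaintext, ciphertext, period=2):
        # One known plaintext/ciphertext pair (e.g. your own password) is
        # enough to recover the entire fixed shift sequence.
        return [ord(ciphertext[i]) - ord(plaintext[i]) for i in range(period)]

    def decrypt(ciphertext, shifts):
        return "".join(chr(ord(c) - shifts[i % len(shifts)])
                       for i, c in enumerate(ciphertext))

    ct = encrypt("hunter2")
    shifts = recover_shifts("hunter2", ct)        # attacker knows one pair
    print(shifts)                                 # -> [3, 7]
    print(decrypt(encrypt("s3cr3tpw"), shifts))   # -> 's3cr3tpw'

One known password/ciphertext pair, or a little frequency analysis, hands you 
the whole shift sequence, and every other password on the LAN falls with it.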

Humans don't know how to create usable, secure things. We only know how to 
create stuff in a kind-of-secure way, and then analyze it, figure out where 
the holes are (or have someone else do it and hopefully inform us!), re-spin, 
and try again. I wish that weren't the way it works, but it is. Taking 
hacking away from computers is like taking away crash-test labs... the good 
guys don't do analysis, and we rely exclusively on data gathered in the field, 
where real people are involved, since good guys wouldn't crash cars. Or break 
crypto algorithms. Or break into computers. Etc.

And not all software breaks are instantly catastrophic. Many papers and demo 
code just improve the state of the art, much like an attack against an 
8-round Feistel. Look at the recent papers on bypassing stack protection in 
Windows XP SP2... that work comes from "hacking", and it doesn't lead to the 
end of the world in one fell swoop. 
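
If the Feistel comparison isn't familiar, here's a toy illustration (mine, not 
any published attack): the same cipher structure instantiated with fewer 
rounds. Results against the reduced-round variant sharpen the tools without 
instantly breaking the full-round cipher, just like a stack-protection bypass 
doesn't instantly own every SP2 box.

    import hashlib

    def F(half, round_key):
        # Toy round function (illustrative only): hash the half-block with the key.
        data = half.to_bytes(4, "big") + round_key.to_bytes(4, "big")
        return int.from_bytes(hashlib.sha256(data).digest()[:4], "big")

    def feistel_encrypt(block, round_keys):
        # 64-bit block split into two 32-bit halves; one swap + mix per round.
        left, right = block >> 32, block & 0xFFFFFFFF
        for k in round_keys:
            left, right = right, left ^ F(right, k)
        return (left << 32) | right

    full_keys = list(range(16))       # the "full" cipher: 16 rounds
    reduced_keys = full_keys[:8]      # the variant a paper might attack: 8 rounds

    print(hex(feistel_encrypt(0x0123456789ABCDEF, full_keys)))
    print(hex(feistel_encrypt(0x0123456789ABCDEF, reduced_keys)))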

There's a big difference between "hacking" and releasing virus creator 
programs, running botnet servers, or writing proof-of-concept code with 
"insert your payload here" and releasing it on "scriptkiddy.net", 
just like there's a difference between cryptography research and selling 
reverse-engineered DirecTV cards, SIM card cracking tools, etc. 

You talk about "real engineering" a lot... maybe that's the problem. 
Engineering and research are DIFFERENT. I'm an engineer who has done 
both. They are totally different animals... engineers are usually focused on 
building something that works within constraints, much like your bridge, 
castle, and 20 other examples... researchers are interested in poking holes 
in theories or coming up with new ones. "This black hole theory is crap, and 
here's why." "I propose a new theory about the stupidity of group dynamics." 
"This algorithm/system/whatever is secure" (I usually can't prove it, just 
speculate!). "Or I claim it's not, and here's my proof." That's research... and 
hacking is just a very applied form of research. (Lots of writing code to 
prove theories, instead of writing equations, interviewing patients, etc.)

Hacking is the art/science of analyzing hardware/software systems to find 
vulnerabilities... cracking is the practice of exploiting those 
vulnerabilities to compromise systems... cracking, or the act of providing 
"material support", is the problem, not the R&D areas of vulnerability 
analysis, proof-of-concept implementation, or publishing findings. Perhaps 
the two are getting confused.

- Jason

On 9/20/05, Marcus J. Ranum <mjr () ranum com> wrote:

pageexec () freemail hu wrote:
i thought it was pretty obvious as we have an analog situation with
cryptography. and you are not advocating a worldwide ban on public
crypto research and development, are you?

Like most analogies, it fails if you pull it far enough. The reason for 
making cryptographic designs public is that it has been 
determined that it's fruitless to try to keep them secret. So it's
easier to start off with them public because it lowers the barrier
for an expert to get involved in analyzing them. Note that this
is a fairly recent philosophy; a lot of secrecy used to surround
algorithms, too.

But the important difference between crypto research and hacking
is that it's rare that a new crypto discovery is going to suddenly
make a vast number of users become vulnerable. For example,
when DES differential cryptanalysis was revealed (there are
indications that it was known about when the algorithm was
developed) it's not as if DES suddenly stopped working, or every
piece of DES-protected data suddenly became a portal into
the host it was running on. The downstream effects of publishing
cryptographic research are not quite as sudden as the downstream
effects of publishing hacking "research."
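
As a rough illustration of "not quite as sudden" (the figures below are the 
commonly cited ones for the published attack, quoted from memory and 
approximate): differential cryptanalysis of full 16-round DES needs on the 
order of 2^47 chosen plaintexts, which an attacker almost never gets from a 
fielded system, versus roughly 2^55 trial decryptions for an average 
brute-force search.

    # Rough numbers, quoted from memory; treat them as approximate.
    chosen_plaintexts = 2 ** 47   # chosen plaintexts for the differential attack
    avg_brute_force = 2 ** 55     # average trials for a 56-bit exhaustive search

    plaintext_bytes = chosen_plaintexts * 8       # 64-bit DES blocks
    print("chosen plaintext needed: %d PiB" % (plaintext_bytes // 2 ** 50))  # 1 PiB
    print("advantage over brute force: 2**%d = %dx"
          % (55 - 47, avg_brute_force // chosen_plaintexts))

Getting a victim to encrypt a petabyte of attacker-chosen data is a very 
different proposition from passively reading traffic, which is part of why 
fielded DES didn't become an open door overnight.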

And, lastly, you neglect to mention that most of the interesting
code-breaks in history were kept secret by the discoverers for
quite some time. Indeed, most of the interesting research in
code-making was kept secret by the discoverers for quite some
time. You still don't know how Type-1 crypto works, and if you
do, you'll just smile and nod. :)

in both hacking and crypto
we're finding and exposing flaws in someone's thinking (or lack thereof,
as it is often the case), and i don't see why that'd be the dumbest
idea. unless you want to live in a dumb world, that is.

Your analogy fails because you're talking about popular myths,
not reality. Yes, there is an active community that is doing public
cryptographic research and they do it openly. They are just
a pale shadow of the classified research that goes on and
has been going on for a lot longer.

I actually hope that your analogy is NOT good - that there are no
deeply classified hacking research groups inside various national
intelligence forces. Rumors float about such things but we won't
know one way or another for a long time, is my guess.

When you're designing a system that is going to come under
attack you're going to design it to withstand currently known
attack paradigms, reasonable extensions of those current
attack paradigms, and speculative new attack paradigms.
Or you're not doing your job. So, what does that mean?

- Cryptosystems:
-> make sure it resists brute force attack
-> throw in a fudge factor on brute force (i.e.: all the processors in the
   world shouldn't be able to break it in under 10,000,000 years; see the
   sketch after this list)
-> make sure it resists differential cryptanalysis
-> think about extensions to differential cryptanalysis and see if they
   might apply (i.e.: think about whether your algorithm is a group
   or whatever)
-> make sure it resists key size timing guess attacks
-> think about extensions of key size timing attacks and see if they
   might apply (i.e.: what other kinds of information might leak
   as a result of variance in processor speed)
...etc
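
A back-of-the-envelope sketch of that fudge factor; the processor count and 
key rate are made-up capacity assumptions, not figures from anywhere in this 
thread:

    # Brute-force fudge-factor check with assumed attacker capacity
    # (the processor count and key rate below are illustrative assumptions).
    SECONDS_PER_YEAR = 365.25 * 24 * 3600

    def years_to_exhaust(key_bits, processors, keys_per_sec):
        trials = 2.0 ** (key_bits - 1)        # average case: half the keyspace
        return trials / (processors * keys_per_sec) / SECONDS_PER_YEAR

    # Say every one of ~10 billion CPUs on earth tries a billion keys per second.
    for bits in (56, 80, 128):
        print("%3d-bit key: %.2e years" % (bits, years_to_exhaust(bits, 1e10, 1e9)))
    # 128 bits comes out around 5e11 years, comfortably past a
    # 10,000,000-year fudge factor; 56 bits fails the test badly.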

- Bank Safes
-> Make sure it resists brute force attack:
a) explosive
b) thermal - hot
c) thermal - cold
d) thermal hot/cold
e) chemical (oxidizer, corrosive)... etc
-> Make sure lock will resist brute force attack:
a) large enough keyspace
-> Make sure it can be attached to the floor
-> Make sure the floor can be attached to the floor
-> etc.

- Castle walls
-> Make sure wall material resists current state of attacks by thickness:
a) projectile
b) shaped charge
c) tunneling
-> Make sure walls are deep enough to prevent undermining
-> etc.

OK, I won't bore you with fleshing these out ad nauseam. But an expert
castle-builder is going to understand the parameters for what is needed
to build a strong castle. And, yes, technologies change. For example, there
are all kinds of nice brick civil-war-era forts that were designed to
withstand smoothbore cannon for months but would be battered to bits by
rifled cannon in days. The reason they would still last days (instead of
minutes) is because of the engineering overhead in the assumptions about
the wall thickness.

Transformative shifts in attack paradigm may cause catastrophic failures.
But they are few and far between. Incremental improvements in attacks
should be within the engineering overhead of good design. Same
applies with crypto or with other security systems. So, if you have a
system that was designed well by someone who thought through
the attack paradigms of the day, then testing it destructively is not
going to make sense.

You can, and should, unit-test components of a top-down design.
For example, when the Golden Gate Bridge was constructed, there
was no technology on earth that could destructively test the main
cables used to support the spans. By design, not even the bridge
itself would stress them enough, because they had an engineering
overhead of something like 25 times the weight they were expected
to support. So what do you do? You note the assumptions of
the portions of the system that ARE testable and test them as
part of your overall construction. In the case of the Golden Gate
they sample-tested many of the cable-strands before they were woven
together into the big spans. That's how real engineering is done.
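
A sketch of that unit-testing logic, with invented numbers (the strand 
strengths, strand count, and loads below are illustrative, not the bridge's 
real figures):

    # All figures are invented for illustration; the point is the method:
    # test the strands you CAN break, then check the assumption the cable rests on.
    sample_breaking_loads = [102.0, 98.5, 101.2, 99.8, 100.4]  # tons/strand (hypothetical)
    strands_per_cable = 27000                                  # hypothetical
    design_load = 60000.0                                      # tons to carry (hypothetical)
    required_overhead = 25.0                                   # the ~25x figure above

    # Be conservative: rate the cable off the weakest sample, not the average.
    implied_capacity = min(sample_breaking_loads) * strands_per_cable
    assert implied_capacity >= required_overhead * design_load, \
        "strand samples do not support the cable's design assumption"
    print("implied capacity: %.0f tons (%.1fx design load)"
          % (implied_capacity, implied_capacity / design_load))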

Take that example into a firewall or a web server. If your design
is that the httpd implementation is chrooted into a locked-down environment,
then you verify that until you're confident that it is. How? There are
several possible tests: kill the process so it leaves a core file, or
look at the process' root inode in a kernel debugger, or whatever.
This is how real engineering is done. It's a disciplined process.
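
On a modern Linux box you could also check it from userland via procfs; this 
is an additional option alongside the core-file and kernel-debugger checks 
above, and it assumes a Linux-style /proc and sufficient privilege over the 
target process:

    import os, sys

    def verify_chroot(pid, expected_root):
        # /proc/<pid>/root is a Linux magic symlink to the process's root
        # directory; reading it needs root or the same uid as the target.
        proc_root = "/proc/%d/root" % pid
        actual = os.readlink(proc_root)          # e.g. "/var/chroot/httpd" or "/"
        host = os.stat("/")
        target = os.stat(proc_root)              # follows the symlink
        shares_host_root = (target.st_dev, target.st_ino) == (host.st_dev, host.st_ino)
        print("pid %d: root=%r, shares host root: %s" % (pid, actual, shares_host_root))
        return actual == expected_root and not shares_host_root

    if __name__ == "__main__":
        # Hypothetical usage: verify_chroot.py <httpd-pid> /var/chroot/httpd
        ok = verify_chroot(int(sys.argv[1]), sys.argv[2])
        sys.exit(0 if ok else 1)

Run it against the httpd pid after startup; if the root it reports isn't the 
jail you designed for, the build fails its own acceptance test.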

Many people have commented on my observation that airplane
builders eschew "penetrate and patch" - and yet whenever they
lose a plane, the NTSB investigates the causes and they rework
the design for that problem. But that's not "penetrate and patch"
so much as it's identifying new failure paradigms and
dealing with them. Penetrate and patch is where you have
beginning C programmers write critical parts of your code,
ship it, and then let Litchfield find the bugs, then turn the
bugs over to another beginner C programmer to write a patch,
and repeat the process indefinitely. Real engineering is when
you design your system to resist all the currently known
failure paradigms - and you insert checks in hopes of detecting
the unfortunate situation in which you're the involuntary "discoverer"
of a new one.

In airplane land I've heard this described as follows:
New Pilot: "OK, I've got it. So I should always follow THE BOOK?"
Old Pilot: "That is correct. ALWAYS follow what THE BOOK says."
New Pilot: "Well, what if something goes wrong that's not in THE BOOK?"
Old Pilot: "That's highly unlikely."
New Pilot: "Yeah, well, what if it DOES?"
Old Pilot: "Then do whatever you think is best, and try to survive. And if 
you
survive we'll add what you did to THE BOOK."


an interesting consequence of your opinion is that unless you want to
admit to have practiced this dumbest idea yourself, you cannot know
what hacking is. so how can you have an opinion on it?

I didn't say I never have practiced dumb ideas!! That article was the
distillation of many many years of dumbness - on my part and on
the part of others. I'm hoping that by sharing mistakes, many of which
are my own, I can help you avoid making them yourself.

on the 'default permit' issue: it is not the dumbest idea, it is the
only way that can scale in systems. take a (not exactly big by any
measure) company with 1000 users and 1000 executable files that these
users need. that's an access control matrix with a million elements.

You're assuming an enterprise with 1000 users, each of whom has
an individual load-out of 1000 executables? That'd be dumber than dumb.
In order to address exactly the example you're raising, a lot of
organizations have a "common desktop environment" etc. Of course
a lot of users hate that. I often can't sleep at night because I am
crying for them.
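
To sketch the point about grouping (the role names and counts below are made 
up): a per-user-per-executable matrix has a million cells, but a handful of 
common desktop profiles collapses the bookkeeping to "user -> role" plus 
"role -> allowed set":

    # Illustrative only; the role names and counts are made up.
    N_USERS, N_EXES = 1000, 1000
    naive_entries = N_USERS * N_EXES              # one decision per (user, exe) pair

    # "Common desktop environment" style: a few roles, each with one whitelist.
    roles = {
        "office": set("app%d" % i for i in range(120)),
        "developer": set("app%d" % i for i in range(300)),
    }
    user_to_role = dict(("user%d" % i, "developer" if i % 10 == 0 else "office")
                        for i in range(N_USERS))

    def allowed(user, exe):
        return exe in roles[user_to_role[user]]

    grouped_entries = len(user_to_role) + sum(len(v) for v in roles.values())
    print("naive matrix entries: %d" % naive_entries)    # 1,000,000
    print("role-based entries:   %d" % grouped_entries)  # 1,420
    print(allowed("user10", "app42"))                    # True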

mjr.


