Dailydave mailing list archives

RE: Hacking: As American as Apple Cider


From: "Fergie (Paul Ferguson)" <fergdawg () netzero net>
Date: Fri, 9 Sep 2005 23:10:30 GMT

Oh, for fuck's sake -- get over it. Although I disagree with
Marcus from time to time, he's spot on in most of his assessments.

And a lot more entertaining than listening to this whining.

- ferg


-- "Kyle Quest" <Kyle.Quest () networkengines com> wrote:

Marcus is slowly losing it... 
He's trying to build this utopia in his mind,
and the more complete it gets, the further he
gets from reality... 

He starts with the whole "white listing"
approach. He seems so convinced that it's
the silver bullet... It would be great
if it were true, but it's not. It's a great
approach... and it could be the ideal we
should strive for, but it's not
achievable... for a number of reasons.

First of all, systems would be impractical
and unusable. If you had an OS module
or an AV that blocked everything that's
not known to be good, what would happen
when a person bought a piece of software
that the AV or the OS module didn't know
about? It wouldn't work, right? It's not
very likely that users would put up with that.
Even if we look at the application white
listing techniques used by current host
security software, what's the story? Well, we
have an average user who gets a pop-up asking
if he/she wants to allow application xyz
to run. Over 99% of the time the user
says yes...
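
Just to make concrete what "block everything that's
not known to be good" means on a host, here's a rough
toy sketch of a hash-based application allowlist check.
This is my own illustration, not anything Marcus proposes
or any real AV product ships; the allowlist contents are
made up.

# Toy sketch of host-side "default deny" by binary hash.
# The allowlist contents are invented for illustration.
import hashlib

APPROVED_SHA256 = {
    # hashes of binaries an admin has explicitly approved
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def may_execute(path):
    # Default deny: the shrink-wrapped app the user just
    # bought isn't on the list, so it simply won't run.
    return sha256_of(path) in APPROVED_SHA256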

It's somewhat similar if we look at
network-based security mechanisms. There are
times when white listing works, but there are
many times when it doesn't. Let's say you have
a service provider with who knows how
many customers. Do you think they'd be
able to get information about every single
web, FTP, etc. server to create a "Default Deny"
policy? The task would be slightly easier
if there were no dynamically generated
content, but what about when there is? 
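
To illustrate what such a policy would have to contain
(again a made-up sketch, not any vendor's rule syntax):
the provider would need to enumerate every customer-facing
flow up front, and anything it didn't enumerate gets
dropped.

# Made-up sketch of a network-level "default deny" policy.
# Hosts and ports are invented for illustration.
ALLOWED_FLOWS = {
    ("tcp", "www.customer1.example", 80),
    ("tcp", "ftp.customer2.example", 21),
    # ...one entry per service the provider actually knows about
}

def permit(proto, dst_host, dst_port):
    # Anything not enumerated ahead of time is dropped, which
    # is exactly the inventory problem for a provider with
    # thousands of customers and dynamically generated content.
    return (proto, dst_host, dst_port) in ALLOWED_FLOWS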

I'm amazed how naive Marcus is when he writes:

"Now, your typical IT executive, when I discuss 
this concept with him or her, will stand up and 
say something like, "That sounds great, but our 
enterprise network is really complicated. Knowing 
about all the different apps that we rely on would 
be impossible! What you're saying sounds reasonable 
until you think about it and realize how absurd 
it is!" To which I respond, "What about the title 
'Chief Technology Officer' are you earning if you 
don't know what your systems are running and/or 
being used for?""

Let's say we have a large financial company
that has custom apps built by contractors,
third-party vendors, or even internal staff.
Does he really believe that a top IT exec will
know that a biz app built 5 years ago, before he
even joined the company, uses a whole bunch of
named pipes and dynamic MSRPC services? And if
he is aware of the nature of this biz app's
network traffic, how would he be able to deploy
a "Default Deny" system without knowing the
low-level details, which are probably known only
to a couple of developers who are long gone?
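
Put differently (a contrived sketch with invented numbers):
the rule set has to be written before the app runs, but the
MSRPC endpoint mapper hands out the service's port at
runtime, so there is no static port to put in the policy.

# Contrived sketch: dynamic RPC endpoints vs. a static allowlist.
# The port range below is an assumption; it varies by Windows
# version and configuration.
import random

STATIC_ALLOW = {135}  # the endpoint mapper port you can know up front

def runtime_endpoint():
    # Stand-in for the port the endpoint mapper assigns when
    # the biz app's RPC service starts.
    return random.randint(1025, 5000)

port = runtime_endpoint()
print(port, port in STATIC_ALLOW)  # almost always False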

All these things show that Marcus has built himself
a utopia where people are perfect... where they don't
make mistakes... and where software is built with
no bugs in it. It would be nice if it were true,
but we all know we are not even close.

Now with his "Hacking is Cool" dumb idea, he again
uses this naive black-and-white... good-and-bad
approach... oversimplifying the hacking and security
research phenomenon we are experiencing right now.

He presents hacking and security research in general
as simply finding exploits and running them, which
he implies in this quote: 

"teaching yourself how to hack is also part of 
the "Hacking is Cool" dumb idea. Think about it 
for a couple of minutes: teaching yourself 
a bunch of exploits and how to use them...".

That's not what most security researchers
do... as (most) people on this list know.

I love this statement:

"Wouldn't it be more sensible to learn how to design 
security systems that are hack-proof than to learn 
how to identify security systems that are dumb?"

It sure would... but that's not commercially
feasible. Products of reasonable complexity 
would take too long to build and would be
prohibitively expensive. This is exactly the
problem we covered in school way back when I was
getting my software engineering degree.
I guess majoring in psychology prevented Marcus 
from learning things like that early in his
career... and even now.

I personally do agree that, to quote Paul Melson,
"hiring 'n0t0ri0uz hax0rz'" is not always a good
idea. I'm referring to the case where a company
hired this guy who wrote Sasser. It doesn't take
much skill to take a HOD proof of concept and
add it to source code that the guy probably
got from somebody else. However, a legitimate
security researcher is always an extremely valuable
asset to a company.

Either way, what about QA? "Hacking" (even if it
wasn't called that) has always been a major
part of the QA process. Marcus is indirectly implying 
that the whole QA concept is useless... once again,
by saying that we should build perfect, hack-proof
systems... 

Sorry for the long post :-)

Kyle


