Firewall Wizards mailing list archives

Re: Extreme Hacking


From: "Craig H. Rowland" <crowland () psionic com>
Date: Wed, 7 Jul 1999 00:10:54 -0500 (CDT)

> > Deja vu! I was just having this discussion the other day (Hi Diana)! I
> > think any security company releasing exploit information needs to
> > seriously consider this as a possibility. IMHO, unless absolute gross
> > negligence is proven on the part of the software development company
> > with respect to the hole, I think most juries would hold the *security
> > company* responsible for damages resulting from its actions. Before the
> > comment comes up: no, I don't think buffer overflows and other common
> > problems are *gross* negligence. I consider them industry-wide
> > stupidity: relying on 1960s/1970s languages for 1990s software. We'll
> > save that for another discussion, though.

> I don't buy that, either.  Buffer overflows are gross negligence, and
> people announcing vulnerabilities (with or without exploits) are just
> doing a public

I don't consider them gross negligence because they are too easy to make
in C and C++, which are the accepted "standard" in software development
today. Programmers from beginner to advanced continue to make this mistake
over and over. It may be negligent in that the software malfunctions, but
the sheer commonality of the problem indicates that the practice is far
from deliberate; it goes beyond individual programmer error into
antiquated software development techniques that are common and accepted.
Again, the only gross negligence is our continued use of unsafe
programming languages for general programming tasks. Programs like your
StackGuard are excellent tools to move us forward. Clearly, though, you
too found the problem common enough to want a blanket solution (i.e., it
is futile to re-train all the programmers and re-write all the programming
books). Perhaps negligence should fall on compiler/OS developers too, for
allowing overflows to work and not deploying StackGuard-like techniques?
The reason I don't think buffer overflows are *gross* negligence is that
there are simply too many people to assign the blame to.
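
To make it concrete, here is a minimal sketch of the kind of mistake I'm
talking about (the function names and buffer size are mine, purely for
illustration). Nothing in C stops the unsafe version from compiling, and it
behaves fine until someone hands it a string longer than the buffer; a
StackGuard-built binary would place a canary word next to the saved return
address and abort on function return if the copy has trampled it.

#include <stdio.h>
#include <string.h>

/* Classic overflow: a fixed-size buffer and an unbounded copy.  Input
 * longer than 64 bytes runs past "buf" and can overwrite the saved
 * return address on the stack. */
static void greet_unsafe(const char *name)
{
    char buf[64];
    strcpy(buf, name);              /* no length check at all */
    printf("hello %s\n", buf);
}

/* Two extra lines make it merely truncate long input instead of
 * smashing the stack -- which is exactly why the mistake is so easy
 * to make and so hard to spot in review. */
static void greet_safer(const char *name)
{
    char buf[64];
    strncpy(buf, name, sizeof(buf) - 1);
    buf[sizeof(buf) - 1] = '\0';
    printf("hello %s\n", buf);
}

int main(int argc, char **argv)
{
    if (argc > 1)
        greet_safer(argv[1]);       /* swap in greet_unsafe() to see the bug */
    return 0;
}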

> service.  If tort law misguidedly starts assigning liability to the
> practice of announcing vulnerabilities, then it will just go underground
> and be announced anonymously.  If that practice is broken, then
> vulnerabilities will go even deeper underground and only the bad guys
> will know about them.

I don't disagree with you on this. I think as a *security company* you
have a higher standard to live up to than someone outside of the industry.
My personal take on the issue is two-fold:

1) I don't mind individuals divulging fully detailed exploits if they feel
so inclined (although I think there are other ways to handle the problem).
2) I do mind security companies doing this.

Also, tort law does *many* things that people don't like (in America at
least). While it may be nice to think that such a suit won't ever happen,
the reality is that it probably will. This issue has transcended the mere
"information must be free" argument and has landed squarely in the realm
of good ol' corporate CYA.

Yeah, it stinks. Perhaps we should file a class-action suit against the
American Bar Association for the mental anguish they have caused over the
years.

> > The recent disclosure of the eEye IIS 4 hole is a perfect example of
> > litigation waiting to happen against a security company.

> eEye/IIS is a perfect example of a large company being whiny and blaming
> the messenger :-)  The only issue here is the pace at which eEye revealed
> the vulnerability.

I disagree with this. There are other issues:

1) They released fully working code before a patch was available.
2) They released fully working *variants* of the exploit after it was
established the hole existed.
3) The code included methods for immediate remote access, when for this
*particular* attack the point could have been made just as easily with
non-intrusive methods (sometimes that isn't an option, though).

Does this mean nobody else would have taken their information and done the
same? No. Does it illustrate the fact that as a *security company* they
were a little too eager and should have shown more restraint? Yes. 

However, I digress, because I happen to think their hack was very well
executed and their findings extremely relevant. The only issue I had with
the entire affair was the manner of disclosure, a position I still
maintain. In any event, this is personal opinion, and I don't mean to turn
this into an indictment of eEye.

> There is a well-established protocol here:
>
>   1. Discoverer notifies the vulnerable product's author/vendor.
>   2. Give them about a week to provide a satisfying response.
>   3. If no response is forthcoming, then announce the sploit to force
>      some action.


Several other factors need to be considered in your protocol as well
(especially with respect to point two):

- What is the likelihood the exploit is being actively abused?
        - Unknown exploits should grant the vendor more time to fix the
        problem (of course, how do you determine what is and is not
        unknown?).
- What is a reasonable time estimate for full product patching, testing,
and deployment by the vendor in question?

For example:

From outside appearances it would seem that the time period for this
particular exploit was too short. Consider that MS must:

1) Diagnose and isolate the problem.
2) Develop a cross-platform fix.
3) Regression test the fix across all platforms and loads.
4) Package the patch and test it across all platforms and loads.
5) Repeat steps 3 and 4 in the respective QA lab.
6) Distribute the patch and send out a warning.

Not being privy to the MS development cycle myself, I can only speculate,
but I suspect the above is a fair assessment. Don't forget that they have
over one million servers out there. It's not a matter of hacking in a fix
and sending it out: if it breaks customers, they are going to be plenty
upset; it's basically a lose-lose situation. Additionally, I've found on
many occasions that reporting a bug directly to a vendor or author allows
them to audit related pieces of code and repair other *unreported* bugs as
well, which can consume even more time but ensures the job is done right.

If the vendor doesn't respond, it is reasonable to "out" them, so to
speak. I think, however, that a heavy dose of discretion needs to be
applied before this step is taken.

> > This gets back to the open disclosure discussion, which is another
> > (off-topic) subject altogether.

> It sounds like precisely the open/full disclosure discussion, but I
> thought that debate was settled long ago?  I'm shocked to find people
> still disputing full disclosure.  Why not argue that the Earth is flat
> while you're at it?

I'm not against full disclosure if it is done responsibly. I do feel that
in the rush to gain fame and attention, patience and respect for the
author's work are often discarded. A security company must apply a higher
standard of responsibility and judgement. I don't know how to say it any
clearer.


> Crispin
> -----
>  Crispin Cowan, Research Assistant Professor of Computer Science, OGI
>     NEW:  Protect Your Linux Host with StackGuard'd Programs  :FREE
>        http://www.cse.ogi.edu/DISC/projects/immunix/StackGuard/


-- Craig



