Vulnerability Development mailing list archives

Re: Behavior analysis vs. Integrity analysis [was: Binary Bruteforcing]


From: "auto12012 auto12012" <auto12012 () hotmail com>
Date: Thu, 28 Mar 2002 08:12:41 +0000

Hello?

Hi.

I think you do not really understand what I was trying to say - try
'Turing "halting problem"' in google.com. Today, analyzing all possible
execution paths of any given program (for example, to determine whether
certain code will be executed or not, and in what order - a critical
issue for static, automated detection of dynamic vulnerabilities) is
excessively time consuming and completely infeasible in most cases.

There are certain specific cases where this is not true, and certain very
limited scopes we can accurately examine (some applications do this - e.g.
compilers try to eliminate dead code, some source code audit applications
look for potentially vulnerable patterns and apply certain heuristic
algorithms to decrease the number of false positives, etc.). But there is
no way to universally predict the behavior of a complex application.
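
(To make "potentially vulnerable patterns" concrete, here is a toy sketch of my own - not taken from any real auditing tool - of the kind of purely syntactic construct such tools flag. Whether the copy is ever reached with hostile input is exactly the execution-path question that is hard in general.)

#include <stdio.h>
#include <string.h>

static void greet(const char *name)
{
    char buf[16];

    strcpy(buf, name);   /* flagged: unbounded copy into a fixed-size buffer */
    printf("hello, %s\n", buf);
}

int main(int argc, char **argv)
{
    if (argc > 1)
        greet(argv[1]);  /* the "dangerous" call is only reachable on this path */
    return 0;
}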

True. I would add that the most critical behavior of all - the final execution path, and also the most difficult thing to evaluate in a dynamic system - is whether the subject will turn out to be hostile or not. That is much harder to determine than whether a number is prime.

However, what does that have to do with pinpointing vulnerabilities? Vulnerabilities are not about how a process is to behave, but about what information - and of what integrity - it bases that behavior on. Predictable behavior, in the context of security analysis, only gives you the ability to increase granularity. It is not a requirement for security analysis.

I do not see, then, how vulnerabilities are linked to execution paths. An application does not become vulnerable simply because it took execution path A instead of path B. It was already vulnerable in the first place, because it based its decision on untrustworthy information.
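
To illustrate (a hypothetical sketch, the structure and names are mine): the handler below is not vulnerable because of which branch it takes - it is vulnerable because the length it acts on comes from outside its integrity boundary and is trusted anyway.

#include <stdio.h>
#include <string.h>

struct packet {
    unsigned int len;      /* claimed by the peer - untrustworthy */
    char data[256];
};

static void handle(const struct packet *p)
{
    char buf[64];

    if (p->len > sizeof(buf))          /* "path A": looks careful... */
        fprintf(stderr, "oversized packet\n");

    memcpy(buf, p->data, p->len);      /* ...but the untrusted length is used either way */
    printf("copied %u bytes\n", p->len);
}

int main(void)
{
    struct packet p = { 7, "example" };
    handle(&p);
    return 0;
}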

Basically, there are two main problems with formal analysis - identifying
the potential vulnerability (which requires a formal model of acceptable
or unacceptable behavior),

"first problem to getting something is knowing what you want"

and determining whether it can be an actual risk
(which can be reduced to the halting problem, pretty much).

Risk must be determined by trust, not behavior. Then it's not a problem anymore.

Example: there is a program that is checking whether some <<insert your
favorite big number here>>-digit number is actually a prime. If so, it
enables you to exploit a buffer overflow (let's just say "enter endless
loop"); if not, it simply dies with some error message ("stop"). It would
take the program 10 years to check this prime. It uses the best prime
checking algorithm we know at this point. Do you know a way to tell
whether the program will stop or not (will be vulnerable or not) without
actually wasting ten years (or a comparable amount of time) on your
computer to check it?
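
(For concreteness, a sketch of the kind of program being described - my own toy version, with naive trial division over a machine word standing in for the years-long primality test on a huge number:)

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Stand-in for the years-long primality test: naive trial division
 * over a machine word. In the scenario described, this is the part
 * that costs ten years of CPU time. */
static int is_prime(unsigned long long n)
{
    if (n < 2)
        return 0;
    for (unsigned long long d = 2; d * d <= n; d++)
        if (n % d == 0)
            return 0;
    return 1;
}

int main(int argc, char **argv)
{
    unsigned long long n = (argc > 1) ? strtoull(argv[1], NULL, 10) : 0;
    char buf[16];

    if (is_prime(n)) {
        /* the "enter endless loop" branch: only reachable if n is prime */
        strcpy(buf, argc > 2 ? argv[2] : "");   /* overflow if argv[2] is long enough */
        for (;;)
            ;
    }

    fprintf(stderr, "not prime - stop\n");      /* the "stop" branch */
    return 1;
}

A static tool can see the strcpy(); deciding whether that branch is ever taken is the part that costs the ten years.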

No. And I am not really interested in knowing. The integrity of my process comes to rely entirely on a function whose output is unknown. Do you trust it? I would not. In a security setting, that is exactly the thing I would consider a vulnerability in my integrity model.

Do you know what would be the implication of having a "yes"
answer to this question?

Lie.

And that's not all - our "endless loop" condition is pretty clear in this
particular case, but sometimes it is not that trivial. Think about
authentication bypassing, etc. There needs to be a model of erratic
behavior for this particular program, and the model itself must be
designed without flaws, which is not trivial. But I am not going to argue
here - it is certainly doable and being done - it is just expensive, time
consuming and prone to problems.
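
(A made-up example of why the model is the hard part: nothing below crashes and no memory is corrupted, so a model that only knows about overflows sees nothing erratic - you need a model of what "authenticated" is supposed to mean.)

#include <stdio.h>
#include <string.h>

/* Nothing here crashes or corrupts memory, yet an empty password is
 * accepted, because the comparison length is taken from the untrusted
 * input (strncmp with n == 0 compares nothing and returns 0). */
static int check_password(const char *supplied)
{
    const char *secret = "hunter2";

    return strncmp(secret, supplied, strlen(supplied)) == 0;
}

int main(void)
{
    printf("empty password accepted: %s\n",
           check_password("") ? "yes" : "no");
    return 0;
}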

Most likely it has been done already. In the 70s, when information security was a science.

Knowing the way to solve the halting problem for infinite Turing machines
in finite time would very likely enable us to perform this analysis for
finite machines in no time (and have dramatic implications not only for
computer security).
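
(A toy illustration of why the finite case is decidable at all - my own sketch, not anything from the thread: a machine with n bits of state has at most 2^n configurations, so simulating it until it either halts or repeats a configuration always gives an answer. It is just that 2^n steps is nowhere near "no time" for real programs, which is why a shortcut would matter.)

#include <stdio.h>

/* A deterministic toy machine whose whole configuration fits in 8 bits;
 * state 0 means "halted". */
static unsigned char step(unsigned char s)
{
    return (unsigned char)(s * 5 + 1);   /* an arbitrary transition function */
}

/* Decides halting by brute force: with at most 256 configurations, the
 * machine either reaches 0 within 256 steps or revisits a configuration
 * and therefore loops forever. The same pigeonhole argument works for any
 * finite machine - it just costs on the order of 2^n steps for n bits of
 * state. */
static int halts(unsigned char s)
{
    char seen[256] = { 0 };

    while (s != 0) {
        if (seen[s])
            return 0;       /* configuration repeated: it will never halt */
        seen[s] = 1;
        s = step(s);
    }
    return 1;               /* reached the halting state */
}

int main(void)
{
    printf("machine started in state 1 halts: %s\n",
           halts(1) ? "yes" : "no");
    return 0;
}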

That is why I find the claim "finds all exploitable bugs" at least
ridiculous -

The notion of an 'exploitable bug', as seen on this mailing list, is ridiculous in itself. I guess all concepts derived from it inherit that property.

it implies that both problems have been solved. It can find "some
potential bugs caused by certain command-line and environment variable
interactions", but not "all exploitable bugs in the application".

Regards,



