WebApp Sec mailing list archives

Re: one use for taxonomies


From: "Frank O'Dwyer" <fod () littlecatZ com>
Date: Tue, 19 Jul 2005 03:56:45 +0100

Paul B. Saitta wrote:

> We all like to think that the definition of the problem is patently obvious.
> It's not. Determining where you need your system to have which properties and
> when they need to be enforced can actually be rather difficult, especially in
> non-trivial systems.

The implication here is that formal threat modelling either makes this
easier or produces better results. I don't see much evidence of either.
Things may improve, but right now it is simply a more costly way to get
the same result. Nobody is suggesting the problem is easy or obvious,
either. I am just noting that despite the claims made for formal threat
modelling, I can see people following a different (and less expensive)
recipe and baking the same cake - because they also do threat modelling,
they just don't formalise it.

This type of debate has occurred before, in the area of reducing program
bugs. There was a time when it was believed that the way forward was to
formally prove all programs correct. However, for anything more
complicated than a bubble sort, it soon became apparent that manual
methods would not fly. Then it was believed that the process could be
automated - that all that was missing was better tools, better theorem
provers, tool support for pre- and post-conditions in programming
languages, and so on. But fundamentally the whole effort fell foul of
the fact that it was more or less exactly as difficult to come up with
the proof for a loop as it was to write the loop itself. In short, the
proof was pretty much as likely to be wrong as the program was.
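
To make that concrete, here is a toy sketch (my own illustration, in
Python - not anyone's actual verification method) of a trivial summing
loop annotated Hoare-style. Note that the invariant and proof
obligations are about as long as the loop itself:

    def sum_list(xs):
        # Precondition: xs is a finite list of integers.
        total = 0
        i = 0
        # Invariant: 0 <= i <= len(xs) and total == xs[0] + ... + xs[i-1]
        while i < len(xs):
            # Invariant holds and i < len(xs), so xs[i] is defined.
            total += xs[i]
            i += 1
            # Invariant re-established: total now covers xs[0..i-1].
        # On exit i == len(xs), so the invariant gives the postcondition:
        # total == the sum of all elements of xs.
        return total

Mis-state the invariant even slightly (off by one in the index, say) and
you end up 'proving' the wrong property - exactly the failure mode just
described.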

That's something you have to consider in formal threat modelling, but I
don't see anyone doing it - what is the likelihood that you simply focus
on the wrong threats, miss one, assign the wrong likelihoods, model the
system incorrectly, or implement the formally analysed model
incorrectly? GIGO - garbage in, garbage out. You have to recognise that
you and your model are actually a part of the system you are trying to
model. Not only that, but any changes you make (mitigations) are going
to introduce new risks of their own, and will introduce feedback loops
whereby attackers modify their approach. This needs to be modelled too.
Some security requirements are even in conflict (availability and
everything else, for example). Risks must be traded off against
benefits. These hard problems exist for every other approach too, of
course, but the point is that the 'formal' in formal threat modelling
does not make them go away. Therefore any claim of completeness or
better results seems a bit hasty to me.
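
A back-of-the-envelope illustration (the numbers are invented, purely to
show the shape of the problem): if a threat model rests on n independent
judgements - threats identified, likelihoods assigned, the system
modelled correctly - each right with probability p, then the chance the
whole model is sound is p to the power n:

    # Toy calculation with made-up figures: per-judgement accuracy p,
    # n independent judgements; probability the whole model is sound.
    p = 0.97
    for n in (10, 30, 50):
        print(n, round(p ** n, 2))
    # Prints roughly: 10 0.74, 30 0.4, 50 0.22

Independence is a simplification, of course, but the point stands:
'formal' does not confer completeness unless every input judgement is
also right.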

> The key part of threat modeling is not deciding which mitigations to use to
> deal with a given weakness and prevent an attack from succeeding, it's figuring
> out what you need to protect in the first place, and where those mitigations
> need to be used.

That has always been a key part of *any* reasonable approach to
security, not just formal threat modelling (except that I would say a
better focus is to justify when *not* to use a control, rather than when
to use it - basically the same thing, but it puts the focus on the risky
decision, which is the decision not to protect something).

[...]

> The other reason to do formal analysis is to ensure completeness.  Any time
> you're using an ad hoc method in a security setting, you've lost.  You may get
> it right 99% of the time, but the attacker only needs you to screw up once.
> Would you trust crypto without formal analysis of its strength?  No.

Actually, not only would I, but I do, and so do you (trust in the sense
of 'rely on'). This is in fact an excellent example to consider. If
crypto were a matter of formal analysis, then we would simply do that
once, on a handful of algorithms optimised for various purposes, and we
would be done.

Instead, what do we see? Cryptanalysis has been going on for decades
and shows no signs of ever finishing. This is precisely because crypto
is *not* formally analysed like that - presumably because it cannot be,
or we do not yet know how. What makes us trust in crypto is the
collective might of a massive, decentralised, and ongoing open peer
review process. Not only is that 'ad hoc', following no particular
formal method that anyone can point at, but nobody is even in charge
of it.

Things get even hazier when you consider implementations of crypto
rather than abstract algorithms. Right now your life and finances most
likely depend on any one of a few dozen crypto implementations whose
hardware and code you've never seen and whose existence you are probably
unaware of. Do you trust them? Yes, de facto. Have you, or can you,
formally or otherwise analyse the threats they present to you? Again,
yes, but only on the basis of educated guesswork, and the admission of
many unknowns that will most likely make your formal model degenerate
into something much more approximate and uncertain.

Cheers,
Frank

[...]

