WebApp Sec mailing list archives

Re: one use for taxonomies


From: "Frank O'Dwyer" <fod () littlecatZ com>
Date: Sat, 16 Jul 2005 09:54:18 +0100

I think that modelling the system is an SDLC problem for which the main
criterion is "whichever one allows you to build the system with the
highest quality (fewest bugs)". In other words it is a classic software
engineering question rather than something that security requirements
change much. There isn't a great deal of difference between how a
software engineer would frame the problem and how a security engineer
would, i.e.:

Software engineer: Implement the business requirements/rules

Security engineer: Implement the business requirements/rules, and only
the business requirements/rules

I don't believe that what you need to do there should vary much from
application to application either - as you would expect, since software
engineering also has its 'greatest hits'. In other words, there really
aren't that many genuinely novel ways to go about capturing
requirements and implementing an application, both in terms of process
and architecture, and in terms of doing it well. I believe you will get
a lot more mileage from considering general cases and coming up with a
process that delivers a good outcome for all of them than you will from
analysing individual apps to that level of detail. Why pretend there is
a lot of variety here when, at the right level of abstraction, there
really isn't? On the other hand, if you *do* see a lot of variety, then
that's a good sign that you are operating at too low a level, or
attempting to over-optimise for particular cases.

Maybe an analogy helps. For taxonomy folks, look at the animal kingdom.
We don't yet know all the species that exist, or that have existed; new
ones are discovered all the time. Yet how many times have you heard of
a biologist saying something like "hey chaps, come over here and
look at this! I think it's a new phylum!"? That hardly ever happens.

Security is similar (apart from the fact that biology has some kind of
objective tree, and I don't think we do). We hear about new examples of
buffer overflows all the time, but the underlying issue has been around
since Christ was a cowboy. Also, the best defences against them,
whatever you consider those to be, haven't changed in an age either.
Consequently, vulnerability reports along the lines of "hey, this
app has a buffer overflow, too!" are a massive yawn. But when was the
last time your threat modelling threw up a genuinely new type of threat
or attack, one for which you had to go away and dream up a wholly novel
countermeasure rather than using one you already knew about? That rarely
happens at all, and usually only in the context of general consideration
of a type of problem rather than a particular application. The last
time I can remember a 'new phylum' type of event was Paul Kocher's
timing attacks, and all the related side-channel attacks which appeared
in the 90s.

Another great example from the animal kingdom is the duck-billed
platypus, which has a muzzle like a duck's bill and a tail like a
beaver's, and which lays eggs but suckles its young. When biologists
first encountered this they had a conniption fit, because when they
tried to fit it into their taxonomy of the time, it blew up. Their
initial response was to go into denial: rather than modify their
taxonomy, they preferred to deny that the duck-billed platypus was a
real animal at all, and dismissed it as an elaborate hoax. The
multi-factor vulnerabilities/threats we've talked about are a bit like
that: they tend to get shoe-horned into the taxonomy somewhere just for
the sake of maintaining an orderly tree, whereas maybe the problem is
that there is no family tree here, and you need something more like a
graph or a web.
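
(To make the graph-versus-tree point concrete, here is a minimal sketch
in Python - the issue and category names are invented for illustration,
not drawn from any particular taxonomy - showing that in a graph a
multi-factor issue can simply hang off several categories at once, where
a strict tree would force us to pick one:)

    # Minimal sketch (hypothetical names): a taxonomy as a graph rather than
    # a tree. In a tree every issue has exactly one parent category; in a
    # graph a multi-factor issue - our platypus - can belong to several.
    taxonomy = {
        "buffer overflow":  {"memory safety", "input validation"},
        "session fixation": {"session management", "authentication"},
        "timing attack":    {"side channels", "crypto implementation"},
    }

    def categories_of(issue):
        """Return every category an issue belongs to - more than one is allowed."""
        return taxonomy.get(issue, set())

    # A strict tree would force us to keep one of these parents and drop the rest:
    print(categories_of("session fixation"))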

Cheers,
Frank

Zhiguly wrote:

Great discussion...

So developing these TMs (or taxonomies, as they are now--I was always
*concerned* when my attack trees degenerated into threat/vulnerability
taxonomies, and hung my head in shame) focuses exclusively on modeling
the threats/vulnerabilities/countermeasures--not the system itself. How
are folks modeling the system? Of course this, too, is a slippery
problem, especially if we want to keep the approach lightweight... I
don't think UML is the answer (see CORAS as one attempt), because in my
fantasy world I don't see it being used... but I digress.

Attack trees as described by Schneier (and probably the CERT papers as
well, but I can't remember) seem to completely ignore the system
representation problem (let alone modeling it), right? I know the MSFT
book (and tool?) went further with DFDs and all that, but I don't think
the tool let you do much--or at least I couldn't without Visio :(

An AND/OR event tree (aka attacker goals strung together) cannot
adequately capture hierarchies of components and subcomponents, data
flows, data at rest (perhaps inheriting the security protection of the
component it belongs to, etc.), functions (perhaps call them operations
or services or whatever), or various system states. And then of course
each of these "system objects" has different taxonomies of
threats/vulnerabilities that can be applied, so that when a compromise
occurs to one object (whether unauthorized performance of an operation,
denial of service, or information disclosure) there is a ripple effect
on all the other system objects... Yes, now I am confused too.... In
short, is anyone else trying to start with some more formal (but not
*too* formal) system model and then apply the threats?
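
(To make the AND/OR structure concrete, here is a minimal sketch in
Python - every goal name and the `achievable` helper are invented for
illustration - and notice that there is nowhere natural in it to hang
components, data flows, or system state, which is exactly the gap I
mean:)

    # Minimal sketch (hypothetical goals): an AND/OR attack tree is trivial
    # to represent, but components, data flows and system state have no
    # home in this structure.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Node:
        goal: str
        kind: str = "LEAF"                 # "AND", "OR" or "LEAF"
        children: List["Node"] = field(default_factory=list)
        feasible: bool = False             # only meaningful for leaves

    def achievable(n: Node) -> bool:
        """A goal is achievable if its leaf is feasible, all AND children
        are achievable, or any OR child is achievable."""
        if n.kind == "LEAF":
            return n.feasible
        results = [achievable(c) for c in n.children]
        return all(results) if n.kind == "AND" else any(results)

    root = Node("read another user's data", "OR", [
        Node("steal a session token", "AND", [
            Node("XSS in the search page", feasible=True),
            Node("token readable by script", feasible=True),
        ]),
        Node("SQL injection in the report module", feasible=False),
    ])

    print(achievable(root))  # True, via the AND branch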

- zh

On 7/15/05, Frank O'Dwyer <fod () littlecatz com> wrote:
 

Brenda wrote:

   

Andrew,

I completely agree that the point of threat modeling is to analyze
business risks, and I also agree that as currently formulated, a threat
model with lots of technical details is difficult to use for business
risk analysis.

     

I'd like to suggest a different and possibly heretical view, which is
that maybe you don't need an (explicit) threat model at all. My reason
for saying that is that none of the formal analysis I've seen leads to
a different outcome compared to much simpler and less formal approaches
(it depends on the approach, of course).

If you step back and look at the ultimate goal, it is to implement a
system [or a system dev lifecycle] with effective countermeasures
(technical and non-technical). The question is, which countermeasures
will those be? Most formal analysis tends to consider that question
along the following lines (hugely simplified):

1. Consider business impact
2. Consider attacks
3. Consider vulnerabilities
4. Consider likelihood
5. Build a threat model
6. Prune attack tree
7. Generate countermeasures to block remaining attacks
8. Implement the same old same old, the same stuff you would have
implemented if you hadn't done any of that.

Basically, you could just proceed directly to step 8 (I'm exaggerating,
but not all that much).

The thing is, if the sort of question you will ultimately answer is
something like "gee, I wonder which transport security protocol we will
use THIS time. Will it be SSL/TLS, or, um...I'm sure there was another
one" - well, why not just cut to the chase?

Or what about authentication: which of the two and a half practical
options will you use there? Are buffer overflows still a problem, or
has that changed since the last time we built a threat model? Hmm, might
we benefit from a firewall? Input validation? Audit trail? Encryption?
And so on. You are basically choosing between well-known security design
patterns, or "security's greatest hits", and actually there aren't all
that many of those (certainly a great many fewer than there are detailed
attacks and vulnerabilities).

(I'm not saying threat models aren't ever useful, by the way, just not
necessarily for the problem they are put forward for. One area where I
think they could be useful is arriving at some formal justification of
which countermeasures are good (for example, in the sense of reducing
attack surface); although in many cases we already have a reasonable
hunch about the answer there too, a formal justification would still be
nice. I'm guessing you don't need an actual app to analyse there,
either. You could perhaps generate trees of attacks at random, and see
which countermeasures work best.)

I guess what I am really saying here is that since you generally want
the sort of countermeasures that operate at a high level in the tree,
and therefore counter everything below them, plus other things as yet
unheard of, you don't need to build a deep tree. Or at least that type
of chunked-up, fuzzy model will get you most of the way there. The rest,
as you say, is an art.

Actually I can think of very few factors that DO lead to a variation in
outcome. Off the top of my head, these are:

1. The overall worst-case business impacts for confidentiality,
integrity, and availability (for those who wish to focus more expensive
and effective controls on potential high-value losses)
2. The technology/architecture you use (this affects how you implement
the same old countermeasures; also, not using a technology means you can
discard threats - and hence countermeasures - that are only relevant for
that technology)
3. The environment your app will deploy in (this is another shorthand
for a very chunked-up threat model - different environments have
different threats, so not being in a particular environment means not
having to care about a basket of threats peculiar to that environment,
and so the associated countermeasures aren't relevant either)
4. The infrastructure you use (infrastructure may implement
countermeasures so you don't need to)
5. A handful of well-aimed questions which may indicate you need fewer
or more countermeasures

One way to implement that whole process is to start with the complete
set of countermeasures you can ever imagine being useful, and
systematically delete those that can be easily justified as not being
relevant to the case in hand, or which don't have the right
cost/benefit. And actually a lot of that can be automated too. I've got
some code that does a proof of concept of that, which I intend to
release (GPL) as soon as I get a minute to do so. It would be
interesting to see if your code and mine could be made to use the same
knowledge base of countermeasures. It is all XML-based, so maybe some
kind of mapping is possible.
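
(In case it helps to picture that elimination step, here is a minimal
sketch in Python - the catalogue entries and tag names are invented for
illustration, and this is not the proof-of-concept code mentioned
above:)

    # Minimal sketch (hypothetical data): start from the full countermeasure
    # catalogue, then discard entries whose technology preconditions don't
    # apply or which the infrastructure already provides.
    catalogue = [
        {"name": "parameterised queries", "requires": {"sql"}},
        {"name": "output encoding",       "requires": {"web-ui"}},
        {"name": "TLS on all listeners",  "requires": set()},   # always a candidate
        {"name": "stack canaries",        "requires": {"native-code"}},
    ]

    def applicable(catalogue, system_tags, provided_by_infrastructure):
        kept = []
        for cm in catalogue:
            if not cm["requires"] <= system_tags:
                continue      # technology not used, so the threat is not relevant
            if cm["name"] in provided_by_infrastructure:
                continue      # the platform already implements it for us
            kept.append(cm)
        return kept

    # e.g. a managed-code web app with a database, behind TLS-terminating infrastructure:
    print(applicable(catalogue, {"sql", "web-ui"}, {"TLS on all listeners"}))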

Cheers,
Frank.

[...much good stuff deleted ...]

   


