WebApp Sec mailing list archives

Re: Threat Modelling


From: "Frank O'Dwyer" <fod () littlecatZ com>
Date: Mon, 24 May 2004 00:33:26 +0100

Mark Curphey wrote:
[...]

I actually think you probably hit the nail on the head when you talk about
"applications of this class". The detail you allude to I have never seen in
any RA tools. If it were there, it would be able to support the threat modeling
process of the type defined in Writing Secure Code etc., and I think that would
be great. Hopefully the MS tool, or others that I think are brewing, will.
OK, in that case I think the distinction you are making between RA and TM is just a matter of terminology - i.e. we are using these terms differently. I also think you have CRAMM in mind when you think of RA tools. Our system is considerably simpler than CRAMM, but it is also considerably simpler than TM systems that use attack trees and so on. In fact, our approach was motivated largely by scepticism of such formal models, and of CRAMM in particular, which tend to be soggy and hard to light in the real world, and which are often based on questionable monetary valuations and likelihood information in the first place.

Yet the kind of technical detail you refer to is in our system, as well as higher-level material. So maybe we are just talking at cross purposes and are actually in violent agreement!

Anyway, hopefully this will be easier to see when we make the actual code and method available in the coming weeks, and you can try it for yourself. Our aim is to make this an open process that anyone can contribute to and that anyone can use. Most importantly, we are opening our content under a liberal license so that anyone can change it or add to it, and can contribute anything they think is missing back in there. We have already gone some way towards this with our OSSS policies and standards (at www.littlecatZ.com/standards/). In the free version of our tool which I mentioned, we are making machine-readable versions available too (more on this below). We don't consider our approach foolproof or complete by any means, but we do consider it useful for building secure systems. Hopefully you will too.

[...]Actually I am sure a set of criteria and
a methodology can be developed that would fit under various tools frameworks
today, after all what we are discussing is a subset of the bigger picture,
higher level risk assessment.

Exactly. That's the approach we've taken. We aim to derive detailed technical and non-technical security controls alike from a simple(ish) business and technical profile of an existing or proposed system, because we believe both kinds of control are necessary. You appear to be looking for a system that derives only technical controls from technical information - if so, that's fine, our system does that too.

We don't use a bulky formal model of any kind, and nor are we CRAMM. We use a very simple, pragmatic approach which is (in both approach and results) roughly equivalent to what a decent security consultant would do if given a few days to look at a system. We believe this is good enough to cover at least 80% of cases and to bring to light glaring errors such as the 'password sent in clear' or 'DNS used for authentication' issues that you mention - which is probably all that is reasonable to shoot for in an automated approach, without getting into formal models that are too heavy to fly for anything more complicated than a "hello world" program. So, although I would refer to our approach as RA, I don't think the label actually matters. To give you an idea of how formal our system is: it is something more heavy-duty than a "wizard" and something simpler than an expert system.

Maybe it would also be helpful if I elaborate on what we mean by a "security control". At the high level this could be anything from "Don't send passwords in clear" to "Ensure that account privileges are revoked when an employee leaves" to "Use strong authentication" to "Make sure that extended privileges are dropped by servers when no longer required" to "Use a safe replacement for strcpy()". Each control comes with specific conditions that trigger the issuing of the recommendation/requirement, detailed 'how to' steps for actually implementing the control, and a detailed motivation for why the control should be there (risks/threats addressed and so on).

A control can also be a specific interpretation of a higher-level control, applied to a specific technology and triggered only when that technology (or a specific version of it) is in use. For example, "Don't send passwords in clear" could be interpreted as "Ensure that SOAP parameters do not include passwords unless the connection is protected using SSL" in the context of an application using web services. This control could also be set up so that it only appears when the environment includes an untrusted network. This kind of analysis is reasonably crude and coarse-grained, but so far we've found that it covers enough cases to be useful.
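To make the idea of condition-triggered controls concrete, here is a minimal sketch in Python. All names here (Control, select_controls, the profile keys) are invented for illustration and are not part of the actual tool; it only shows the general mechanism: each control carries trigger conditions over a system profile, and only matching controls are emitted.

```python
# Hypothetical sketch of condition-triggered control selection.
# The data model and names are invented; the real tool is data-driven
# from SKML documents rather than hard-coded lists like this.
from dataclasses import dataclass


@dataclass
class Control:
    title: str
    howto: str
    conditions: dict  # profile keys/values that must all hold to trigger


CONTROLS = [
    Control("Don't send passwords in clear",
            "Protect all credential-bearing traffic with encryption.",
            {"untrusted_network": True}),
    Control("Ensure that SOAP parameters do not include passwords "
            "unless the connection is protected using SSL",
            "Require HTTPS for web-service endpoints accepting credentials.",
            {"untrusted_network": True, "technology": "web services"}),
]


def select_controls(profile, controls=CONTROLS):
    """Return the controls whose trigger conditions all hold in the profile."""
    return [c for c in controls
            if all(profile.get(k) == v for k, v in c.conditions.items())]


# A system using web services over an untrusted network triggers both the
# generic control and its web-services-specific interpretation.
profile = {"untrusted_network": True, "technology": "web services"}
for control in select_controls(profile):
    print(control.title)
```

A profile without the "web services" entry would trigger only the generic control, which is the coarse-grained specialisation behaviour described above.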

All of this is data-driven in our system. We express controls in documents we call SKML documents (for Security Knowledge Markup Language). SKML is actually an extended version of docbook/XML, with extra markup to indicate which pieces of text are security controls. On the one hand, these documents correspond to standard high-level and detailed technical security policy documents; they could also correspond to something along the lines of recommendations derived from the OWASP Top 10 (which, by the way, we are actually working to express in SKML as a further proof of concept). Because SKML documents are also close to docbook documents, they can be processed using XSL and docbook tools to generate the usual human-readable documents (PDF, HTML, etc.) that people are already familiar with. However, the same document is structured enough to be machine-readable, and can be treated as a database from which a tool can extract the appropriate controls.

This means that the human-readable documents and the control database can be automatically maintained in sync from the same source, kept in CVS, built using ant/makefiles, sliced and diced by XML tools, and so on - this is in fact what we do for our own OSSS content. A lot of our analysis logic is simply repeated XSL transformations on SKML source documents, resulting in a report that is also in XML.
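As a rough illustration of the "document as database" idea, here is a short Python sketch that pulls marked-up controls out of a docbook-like fragment. The element names and the role="security-control" attribute are invented stand-ins - the real SKML vocabulary may well differ - but the principle is the same: the same source can be rendered to PDF/HTML by docbook tooling, or queried programmatically for controls.

```python
# Hypothetical sketch: extract security controls from a docbook-like
# document. The markup vocabulary below is invented for illustration;
# it is not the actual SKML schema.
import xml.etree.ElementTree as ET

SKML_SAMPLE = """
<article>
  <title>Password handling policy</title>
  <section role="security-control" id="pwd-clear">
    <title>Don't send passwords in clear</title>
    <para>Credentials must never cross an untrusted network unencrypted.</para>
  </section>
  <section>
    <title>Background</title>
    <para>Ordinary narrative text, not a control.</para>
  </section>
</article>
"""


def extract_controls(skml_text):
    """Treat the document as a database: return (id, title) for each
    section explicitly marked as a security control."""
    root = ET.fromstring(skml_text)
    return [(sec.get("id"), sec.findtext("title"))
            for sec in root.iter("section")
            if sec.get("role") == "security-control"]


print(extract_controls(SKML_SAMPLE))
```

In practice the same filtering can be done with an XSL transformation over the source document, which matches the repeated-XSLT pipeline described above.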

Cheers,
Frank

[...]

--
Frank O'Dwyer      <fod () littlecatZ com>
Little cat Z       http://www.littlecatZ.com/


