Secure Coding mailing list archives

Provably correct microkernel


From: karger at watson.ibm.com (Paul Karger)
Date: Mon, 05 Oct 2009 08:56:30 -0400



Bobby Miller writes:

I might argue that it may fix problems that aren't fixable otherwise.
My experience in this area is very old, but I found that the biggest benefit
of formal methods was not so much the proof itself as the flaws discovered
and fixed on the way to the proof.



In conclusion, it seems an awful lot of effort to fix half the problem.  I'd
expect, though I can't prove, that a combination of other secure development
processes working together will get better results with less overall effort.

CJC



The significance of this result has to be understood in the context of
building high-assurance software.  In no way are proofs of correctness
the end of the story on the security of a particular piece of software.
After all, even proofs can and do have bugs.

If you look at how the Orange Book, the ITSEC, and the Common Criteria
approach high-assurance security, formal methods are just one part 
of the solution.  

My own experience can confirm that just constructing the formal
specifications without even a single proof can turn up serious
problems.  I've seen this multiple times on completely different
high-assurance systems.

Achieving a high degree of confidence in the attack-resistance of a
particular piece of software requires MANY kinds of effort - careful
design, carefully written specifications, strong coding practices,
extensive testing and penetration analysis, formal modeling, formal
specification, proofs of specs back to models, and what NICTA has
demonstrated, proof of code back to specifications.  No one of these
techniques by itself is the magic bullet.  You need to apply ALL of them.

Note that this is also why I question the validity of so-called
"proof-carrying code".  This approach places ALL reliance on the
proofs, and as I said, proofs often have bugs!  The Common Criteria
approach of using proofs in conjunction with lots of other software
engineering techniques seems much stronger to me.


The point of proving a microkernel is that if you can show that the
microkernel is secure, then you don't have to prove many properties
about software that runs on top of it.  But even that doesn't mean
that you don't have to prove anything about upper-level software.  If
the microkernel's security policy is sufficient to meet all your
security needs, then nothing more is required.  In particular,
protection of secret information is a policy that can be enforced by
a microkernel, or (using the term that I prefer) a security kernel.
The Bell and LaPadula model is such a policy that can be completely
enforced by such a security kernel (modulo covert and side channels,
of course).
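
To give the flavor of what such a kernel enforces, here is a toy
sketch of the two Bell and LaPadula mandatory access checks.  This is
my own illustration, not seL4 code: the level names and the simple
total ordering (a real system would use a lattice with categories)
are invented for the example.

    # Toy Bell-LaPadula reference monitor (illustration only).
    LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1,
              "SECRET": 2, "TOP_SECRET": 3}

    def may_read(subject_level, object_level):
        # Simple-security property ("no read up"): the subject's
        # level must dominate the object's level.
        return LEVELS[subject_level] >= LEVELS[object_level]

    def may_write(subject_level, object_level):
        # *-property ("no write down"): the object's level must
        # dominate the subject's level, so information never flows
        # to a lower level.
        return LEVELS[object_level] >= LEVELS[subject_level]

    # A SECRET subject may read CONFIDENTIAL data but may not write
    # it back down, so secrets cannot leak through storage channels.
    assert may_read("SECRET", "CONFIDENTIAL")
    assert not may_write("SECRET", "CONFIDENTIAL")

Notice that these checks mention only subjects, objects, and levels -
nothing about what the applications compute - which is exactly why a
security kernel can enforce this policy for everything above it.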

On the other hand, a security kernel cannot enforce a security policy
that requires application correctness, such as protection of financial
transactions.  To do that, you have to prove properties about the
transactions themselves.  (At the least, that if you add 2 + 2, you
actually get 4.)  Clark and Wilson showed that distinction in their
classic paper.

Clark, D.D. and D.R. Wilson, "A Comparison of Commercial and Military
Computer Security Policies," in Proceedings of the 1987 IEEE Symposium
on Security and Privacy, 27-29 April 1987, Oakland, CA: IEEE Computer
Society, pp. 184-194.
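
To make the distinction concrete, here is a matching toy sketch of a
Clark-Wilson-style well-formed transaction.  Again, this is my own
illustration, not their notation; the account names and the invariant
are invented for the example.

    # Toy well-formed transaction (illustration only).
    accounts = {"alice": 100, "bob": 50}

    def total():
        return sum(accounts.values())

    def transfer(src, dst, amount):
        # A well-formed transaction: it must preserve the integrity
        # constraint that the total of all balances is unchanged,
        # and it must reject ill-formed requests.
        if amount <= 0 or accounts[src] < amount:
            raise ValueError("transaction rejected")
        accounts[src] -= amount
        accounts[dst] += amount

    before = total()
    transfer("alice", "bob", 30)
    # An integrity verification procedure re-checks the constraint.
    # Here 2 + 2 really must equal 4: this is a property of the
    # application code itself, not an access-control decision that
    # any kernel could make for you.
    assert total() == before and accounts["bob"] == 80

No information-flow policy enforced underneath can tell you whether
transfer() got the arithmetic right; only reasoning about the
application itself can.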

What the folks at NICTA have accomplished is very significant - the
first time that a full microkernel has been proven correct at the code level.
Earlier such proofs were all at the formal specification level.  This
type of code proof is what the original Orange Book talked about
as "beyond A1".  

The most interesting next step would be to take the seL4 microkernel 
(with its proofs) and conduct an EAL7 Common Criteria evaluation of it
in a real system context, and to determine what new requirements could
be met in a future EAL8 evaluation.  

Paul Karger
  



