Secure Coding mailing list archives

SC-L Digest, Vol 3, Issue 197


From: karger at watson.ibm.com (karger at watson.ibm.com)
Date: Sun, 02 Dec 2007 18:07:00 -0500



andre at operations.net writes

Extended criteria to the above would involve designs/code that are
formally specified or verified (using formal methods/logic).  Think
Orange Book (TCSEC, ITSEC, Common Criteria, et al.) and you'll know
what I mean by this.  The interesting thing about the Orange Book is
that even the most assured/functional division, A1, only requires that
the formal top-level specification (i.e. the TCB design, a.k.a. the
security policy model, a.k.a. the access-control matrix) be formally
verified.  Criteria that go beyond A1 would formally specify and/or
verify the actual source code.

The Orange Book explicitly wanted to include only requirements that
had been shown to be feasible at the time the criteria were written (1985).
At that point in time, formal verification of design specifications
was clearly feasible (although difficult), but formal verification
of sizable amounts of source code was clearly not yet feasible.  Even
today (2007), formal verification of large amounts of source code is
still very difficult.  That is why the Orange Book and all the subsequent
criteria have open-ended scales.  As the state of the art improves, the
intent has always been to define A2 or A3 or EAL8 assurance levels.

Note that my suggested five-star rating system is only half of a
secure software initiative - the assurance part.  According to the
Orange Book, there also have to be functional measures that mitigate
the inherent problems with trusting the TCB: object reuse and covert
channels.  The Orange Book specifies that the TCB requires a security
kernel, a security perimeter, and trusted paths to/from the users and
the TCB (input validation is referred to as "secure attention keys").

Not only does the access control matrix have to be well designed and
implemented, but there also have to be safeguards (today we sometimes
call these exploitation countermeasures) to protect against problems
with object reuse or covert channels.

This is why the Orange Book explicitly required specific access control
systems.  



Access control systems can be programmatic, discretionary, mandatory,
role-based, or declarative (in order of sophistication and assurance).

In the Orange Book, the access control is specifically a mandatory system
that has effective proofs already done of the model.  The Common Criteria
leaves this up in the air, and relies on the developers and evaluators to
actually design an effective and consistent access control system and to
prove its properties.  It is not clear that the Common Criteria process
has always met that goal!  There are many loopholes that were not present
in the Orange Book.
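The mandatory model Paul refers to here is Bell-LaPadula, whose properties
were proved for the B and A divisions.  A minimal sketch of its two access
checks (the level names and numeric lattice are illustrative assumptions,
not taken from the criteria):

```python
# Sketch of Bell-LaPadula mandatory access control checks.
# The levels and their ordering are illustrative assumptions.
LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

def dominates(a: str, b: str) -> bool:
    """True if level a dominates level b in the lattice."""
    return LEVELS[a] >= LEVELS[b]

def may_read(subject_level: str, object_level: str) -> bool:
    # Simple security property: no read up.
    return dominates(subject_level, object_level)

def may_write(subject_level: str, object_level: str) -> bool:
    # *-property: no write down (prevents leaking data to lower levels).
    return dominates(object_level, subject_level)
```

Under this sketch a SECRET subject may read a CONFIDENTIAL object but may
not write to it, since writing down could leak SECRET data.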

Transitioning between states is possibly the hardest thing to get
right in terms of security in applications - fortunately it is
something usually taken seriously by developers (i.e. vertical and
horizontal privilege escalation), but strangely less so by security
professionals (who often find semantic bugs like buffer overflows,
parser bugs, and input/output issues the most interesting).
This is probably why the TCB is the only part in the Orange Book
criteria that requires formal specification and/or verification to get
to the highest division (A1).

The TCB in the Orange Book contains ALL the security sensitive code - not
just the kernel of the operating system.  Applications that enforce security
properties are also part of the TCB.
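One way to keep state transitions honest is to route every one of them
through a single check point - a reference monitor in Orange Book terms.
A sketch of the idea (the states, roles, and transition table are
hypothetical, chosen only to illustrate the deny-by-default pattern):

```python
# Sketch: every state transition passes through one check point,
# denying by default.  The transition table is an illustrative
# assumption, not from the Orange Book.
TRANSITIONS = {
    # (from_state, to_state): role required to perform the transition
    ("draft", "submitted"): "author",
    ("submitted", "approved"): "reviewer",
    ("approved", "published"): "admin",
}

class PrivilegeError(Exception):
    pass

def transition(state: str, new_state: str, role: str) -> str:
    required = TRANSITIONS.get((state, new_state))
    if required is None or role != required:
        # Unknown transitions and wrong roles both fail, closing off
        # both vertical and horizontal escalation paths.
        raise PrivilegeError(f"{role!r} may not move {state!r} -> {new_state!r}")
    return new_state
```

Because unlisted transitions fail, an "author" cannot jump a document
straight from "draft" to "published", even though each state is reachable.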


Object reuse and covert channels provide the landscape in which security
weaknesses happen.  Object reuse means that because memory is
shared, things like buffer overflows (stack or heap), integer
vulnerabilities, and format-string vulnerabilities are possible.  When
a disk or memory is shared, it can be written to or read by anyone
with virtual access to it (depending on the nature of the filesystem
and how the TCB grants access to it).  Covert channels provide access
to IPC (inter-process communication), both inside and outside the
system, via TCB access rights.  The related problems of storage and
timing channels also fall into this category.
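The object reuse requirement boils down to scrubbing storage before it is
handed to a new subject.  A sketch of a buffer pool that does this (the
pool API is a hypothetical illustration, not any particular allocator):

```python
# Sketch: a buffer pool that scrubs memory on release, so a later
# holder cannot read the previous owner's data - the Orange Book's
# object reuse requirement.  The API here is an illustrative assumption.
class ScrubbingPool:
    def __init__(self, count: int, size: int):
        self._free = [bytearray(size) for _ in range(count)]

    def acquire(self) -> bytearray:
        return self._free.pop()

    def release(self, buf: bytearray) -> None:
        buf[:] = bytes(len(buf))   # overwrite with zeros before reuse
        self._free.append(buf)
```

Without the zeroing line, a subject acquiring a recycled buffer would see
whatever secret the previous holder left behind - the classic object
reuse leak.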

Of course, the proper way to prevent misuse of objects or channels is
to assure the TCB design and the source code.  However, by using
security perimeters and trusted paths (along with a reference monitor,
or security kernel) we can provide the "functionality" necessary to
further protect these elements.  The same is true with automobiles,
in the form of seat belts and airbags, as well as optional components
such as harnesses, roll cages, helmets, driving gloves, sunglasses,
etc.

Modern-day exploitation countermeasures and vulnerability management
are these security functions: software update is your seat belt, ASLR
your airbags, and the optional components include GPG, SSL, anti-virus,
patch management, firewalls, host- or network-based IPS, NAC, DLP,
honeypots, etc.  I like to compare the optional components to the
highway system and traffic lights when they are provided externally
(e.g. by your IT department or ISP).

It may appear that I'm suggesting an end to penetration testing (which
probably isn't popular on this mailing list anyway), but that's not
entirely true.  Automobiles still require the occasional
brake/wiper/tire replacement, tune-up, or oil change, and even weekly
checks such as tire pressure.  Security testing in the operations or
maintenance phase will probably remain popular for quite some time
(especially the free security testing from the vulnerability-obsessed
security community).


The Orange Book and the Common Criteria explicitly require penetration
testing, particularly at the higher levels, as well as many other
techniques.  This is because you can never assume that a single
security measure will solve all problems.  Even formal proofs can have
bugs.  High assurance uses formal methods and testing and penetration
testing and development procedures, etc., because a combination of
methods is much less likely to leave latent problems than relying on a
single method.  I don't think that security testing will ever cease to
be necessary, even with full code proofs.  Proof tools can have bugs,
and they only show correspondence to specifications.  Specifications
can also have errors.

Paul

