Firewall Wizards mailing list archives

RE: Practical Firewall Metrics (rant)


From: "Biggerstaff, Craig T" <Craig.T.Biggerstaff () usahq unitedspacealliance com>
Date: Mon, 23 Feb 1998 15:51:26 -0600

:set nolurk

I think if we're going to get anywhere useful with certification of
firewall systems, the basic building blocks such as packet filtering,
NAT, proxy forwarding, etc., have to be defined.  For example, an RFC
for packet filter behavior, an RFC for NAT behavior, an RFC for proxy
behavior.

Without a lowest common denominator of *requirements* (as opposed to
mere *good features to have*), we're going to run into the old problem
of vendors that advertise "passes 95 out of 100 tests."  Further, any
RFC requirements should be on the easy side.  "Doesn't withstand
prolonged FBI siege" should not be counted equal to "Leaves key under
doormat".

If you load up a test suite with uber-sophisticated attacks, it may
illustrate vulnerability, but it probably won't encourage widespread
compliance.  Dan Farmer's SATAN adventure proved that, given half a
chance, organizations would rather stop the *exposure* of a vulnerability
than stop the vulnerability itself.

:set lurk


-- Craig Biggerstaff
craig () blkbox com

----------
From:  tqbf () secnet com[SMTP:tqbf () secnet com]
Sent:  Friday, February 20, 1998 9:36 PM
To:    firewall-wizards () nfr net
Subject:       Re: Practical Firewall Metrics (rant)


I'm reading with interest the comments of people on this mailing list with
regard to testing/certification/benchmarks of firewall systems. I think
I'm already pretty familiar with Mr. Ranum's take on this (having read his
paper), but it's enlightening to see what other people think. Thanks for
starting an interesting discussion.

In any case, I work for a company that (obviously) pays quite a bit of
attention to security technology assessment, and firewall metrics and
tests interest me. I don't think firewall "certification" is impossible,
as long as you understand going into it what it is you're certifying. 

There are, as Mr. Ranum and others point out, quite a few obstacles to
meaningful firewall certification, just as there are for meaningful IDS
certification. I think this is the case because the system, as a whole,
can be evaluated on many different levels. Is it configured right? Do the
configuration options work the way they're intended? Does the code
underneath that work? You go on and on until you hit the lowest level
where the attackers in your threat model can influence the system.

It is apparent to me that it's unrealistic to try to certify a firewall as
a whole system; I agree (I think) with Mr. Ranum that doing this would
require certification of every installation of the product, and, while that
might be a nice scam to get into, it's not practical for an organization
trying to drum up support for a free certification initiative.

What I'm interested in is evaluation of the network "engine" (excuse me
for using that term) of a firewall. To take a simplified example, I'd like
to see widely accepted tests for static packet filter implementations (can
I get past it with IP fragmentation? does SYN+FIN slip past an
"established" filter?). I think it's reasonable to say that a test suite
which limits itself in scope to the (say) packet filter implementation
would have some value.
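
To make that concrete, here is a rough sketch of what such probes could
look like, written with the Scapy packet library purely for illustration
(the target address, port, and the "established"-style rule it assumes
are placeholders, not any particular product's configuration):

    # Sketch: probe a static packet filter for two classic weaknesses.
    # Assumes a placeholder host 192.0.2.10 behind the filter, TCP port
    # 80, and that something on the inside can report what got through.
    from scapy.all import IP, TCP, fragment, send

    target = "192.0.2.10"

    # 1. SYN+FIN: some "established"-style rules match the flag byte
    #    exactly against SYN, so also setting FIN may slip past them.
    synfin = IP(dst=target) / TCP(dport=80, flags="SF", seq=1000)
    send(synfin)

    # 2. Tiny fragments: with 8-byte fragments the TCP flags fall outside
    #    the first fragment, often the only one the filter inspects.
    syn = IP(dst=target) / TCP(dport=80, flags="S", seq=2000)
    for frag in fragment(syn, fragsize=8):
        send(frag)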

Of course, this applies to other types of access control techniques as
well. A test suite for a NAT firewall could exercise the system's behavior
under extreme load, or it could configure an inbound translation and test
the conditions under which packets get rewritten and retransmitted onto
the network behind the firewall.
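
A sketch of what an inbound-translation test might look like, again with
Scapy and with the outside address, ports, and inside interface invented
for the example (in practice the sniffer runs on a second host behind the
firewall):

    # Sketch: exercise an inbound NAT translation and observe what gets
    # rewritten and forwarded inside.  Addresses, ports, and the inside
    # interface are placeholders for illustration.
    from scapy.all import IP, TCP, UDP, send, sniff

    nat_external = "198.51.100.1"   # firewall's outside address (placeholder)

    probes = [
        IP(dst=nat_external) / TCP(dport=80, flags="S"),  # mapped service
        IP(dst=nat_external) / TCP(dport=25, flags="S"),  # unmapped port
        IP(dst=nat_external) / UDP(dport=53),             # is UDP translated?
    ]
    for p in probes:
        send(p)

    # Run on a host behind the firewall: anything captured here was
    # rewritten and retransmitted onto the inside network by the NAT.
    inside = sniff(iface="eth1", filter="tcp or udp", timeout=10)
    inside.summary()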

I guess the basic thought that comes to mind (and I'm sure it's an obvious
point) is that many different implementations of the same technology
(stateful filters, application proxies, etc.) share the same types of bugs,
due to common implementation mistakes (the fixed-offset problem for
catching IP options, for instance). Whenever we identify a new problem
like this, we can try to ascertain the implementation mistake that was
made that allowed it to work. Chances are, it'll be some simple
characteristic of the software implementation (for instance, "can't spot a
source route option unless it's in the byte after the fixed-length IP
header"). Once you come up with a characteristic of the system you want to
test, it's not all that difficult to look at how the system is configured
and devise a test that determines whether that characteristic is present.
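
For the source-route case specifically, a probe could look something like
the following sketch (Scapy again, just for illustration; the addresses
are placeholders, and it assumes the filter's option check really is
anchored at the fixed header offset):

    # Sketch: does the filter only recognize a source-route option when
    # it is the *first* option after the fixed 20-byte IP header?
    # Prepend a NOP so the LSRR option starts one byte later than a
    # hard-coded offset check would expect.
    from scapy.all import IP, TCP, IPOption_NOP, IPOption_LSRR, send

    target = "192.0.2.10"

    # Option list: NOP (1 byte), then loose source routing.  A filter
    # that checks only the byte immediately after the fixed-length
    # header will see the NOP and miss the source route entirely.
    ip = IP(dst=target,
            options=[IPOption_NOP(),
                     IPOption_LSRR(routers=["203.0.113.5"])])
    send(ip / TCP(dport=80, flags="S"))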

What I am thinking here is that you can get a valuable metric of a given
type of firewall if you know a set of characteristics that lead to
security problems, and can devise tests for each of those characteristics.
The actual code to implement the tests may not be the same from system to
system (e.g., my Lucent Managed Firewall stateful filter suite may have to
be completely re-implemented, even after coming up with a test suite for
Firewall-1), but I really don't see this as a particularly complex problem
(compared to the difficulty of coming up with the characteristic to test
in the first place).

The "certification" then becomes the list of characteristics tested
("deals with IP options correctly", "can correctly handle fragments in 
these cases", etc), and the specific tests used to verify each of them.
The characteristics are shared in common over all systems of the same
type, and the tests themselves are (probably) custom-written for the
specific system tested.
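
Structurally, I imagine something like the following sketch: a shared list
of characteristics, and a per-system table of the test functions that
verify them (every name and stub body below is invented for illustration):

    # Sketch: characteristics are shared across every system of a given
    # type; the tests that verify them are written per system.
    CHARACTERISTICS = [
        "spots_ip_options_at_any_offset",
        "drops_syn_plus_fin_on_established_rules",
        "handles_tiny_fragments",
    ]

    def fw1_ip_options_test():
        # would send the NOP+LSRR probe from the earlier sketch and
        # check, via a sniffer behind the Firewall-1 box, that it was dropped
        return True

    def lmf_ip_options_test():
        # same characteristic, different probe code for the Lucent box
        return True

    TEST_SUITES = {
        "firewall-1": {"spots_ip_options_at_any_offset": fw1_ip_options_test},
        "lucent-mf":  {"spots_ip_options_at_any_offset": lmf_ip_options_test},
    }

    def certify(system):
        # pass/fail per shared characteristic; None means "no test written"
        suite = TEST_SUITES.get(system, {})
        return {c: (suite[c]() if c in suite else None)
                for c in CHARACTERISTICS}

    print(certify("firewall-1"))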

The obvious difficulties are that you need to figure out what the valid
characteristics to test for are for each type of firewall you purport to
evaluate (the requirements of a stateful filter are entirely different
from those of a system reliant entirely on NAT); moreover, some firewalls
may overlap into multiple categories (a NAT firewall that does static
packet filtering for inbound translations). 

And, of course, you need to write test tools. I see this as becoming less
and less of a problem as time goes on. We've seen a steady progression in
the quality of testing tools, from Reed's initial ipsend, to CAPE, to the
most recent ipsend, Sun Packet Shell, SNI CASL (which will be free for
projects like this), etc. Additionally, a lot of the methodology for
constructing tools carries over easily from one tool to the next (for example,
CAPE was, I believe, novel in that it "signed" wacky packets before
sending them through packet filters, and relied on a special sniffer on
the other side to pick out which packets made it through --- this would be
easy to write in Tcl/Packet Shell too). 
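
A sketch of that sign-and-sniff approach, with the marker value, addresses,
and interface all invented for the example:

    # Sketch: tag every probe with a known payload marker; a listener
    # behind the firewall reports which tagged packets survived the trip.
    from scapy.all import IP, TCP, Raw, send, sniff

    MARKER = b"FWTEST-7f3a"
    target = "192.0.2.10"

    def send_signed_probe(probe_id, **tcp_args):
        payload = MARKER + b":" + str(probe_id).encode()
        send(IP(dst=target) / TCP(**tcp_args) / Raw(load=payload))

    # Sender side: number the probes so the far side can say *which*
    # packets made it through, not just how many.
    send_signed_probe(1, dport=80, flags="S")
    send_signed_probe(2, dport=80, flags="SF")

    # Receiver side (run on a host behind the firewall): keep only
    # packets carrying our marker.
    def is_signed(pkt):
        return Raw in pkt and MARKER in bytes(pkt[Raw].load)

    for pkt in sniff(iface="eth1", lfilter=is_signed, timeout=15):
        print("made it through:", bytes(pkt[Raw].load))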

Remember, though, that I'm a software developer and see everything through
the rose-colored glasses of Emacs and font-lock-mode. I'm often surprised
by how long things that seem relatively straightforward on paper take to
implement in the real world. 

Really, I think the first step that has to happen before anything can move
forward is some definition of what it is we'd be attempting to
certify about firewalls. The scope needs to be defined (and refined
through discussion) before you can really even start to think of how
testing would work.

However, one thing I don't really agree with is that "testing firewalls is
hard, maybe impossible, hence we shouldn't really spend much time working
towards it" (nobody is making this point yet). Besides the
(DOD-sponsored?) MITRE evaluations, I don't know of any technically
credible effort that has been done to define a set of cross-platform
firewall tests. I think this'd be worth doing, if only to come to a firm
conclusion that "it's not worth it".

-----------------------------------------------------------------------------
Thomas H. Ptacek                                       Secure Networks, Inc.
-----------------------------------------------------------------------------
http://www.enteract.com/~tqbf                          "mmm... sacrilicious"




