Firewall Wizards mailing list archives

Re: Firewall RISKS


From: "Stephen P. Berry" <spb () incyte com>
Date: Mon, 21 Jun 1999 12:35:50 -0700

-----BEGIN PGP SIGNED MESSAGE-----


In message <609B8FA424F9D111A91A00805FD71EE2CF46F5@dslexxch3.spsxch>, kevin.sheldrake () baedsl co uk writes:

When a company writes an implementation of an internet service
(such as bind), the majority of the testing is against the requirements
of the service.  When someone tests a firewall (however it was built)
the majority of the testing is against possible security holes.

Question begging.  It obviously depends on the application and the firewall.
But even if we take it as read that firewalls are -more- tested than
application-level daemons, that doesn't imply that they are
necessarily -better- tested.  In other words, even if this is true,
it is quite possibly irrelevant.


[I'll forgo discussion of the bogus `statistically likely'
argument]

Maybe I don't understand the meaning of the words 'statistically'
or 'likely'.  In my mind, the first implies using historic data to
analyse patterns; the second implies a greater than half probability of
an event occurring.  The chances of the software you are running
containing bugs are, therefore, statistically likely.

I'm not convinced that it is worth my while to be arguing the -firewall-
issues here;  I'm overwhelmingly convinced that it isn't worthwhile to
argue statistics here.  In any case, the point is irrelevant to the
main argument, and to that end I've stipulated it.

Suffice it to say, then, that I call the argument `bogus' because attempting
to reduce something as complex as the security of a computer system/network
to a single summary statistic can only yield meaningless results in
this context.  Also, such a statistic does not necessarily translate
into a probability (as seems to be implied[1]).


Yes, bad software gets written.  Let's, just
for grins, posit that the latest bind (which is source available) and
the latest squid (also source available)[-] are more buggy
than your average firewall product (which probably isn't going to be
source available)[-], and work (below) on this hypothesis.

And the point of the above paragraph?  Making the source
available does not make software any more likely to work.  It is all to do
with the environment within which the software was developed and the
testing regime.

The guilty flee when no man pursueth.  I stipulated the point.

You're right in noticing that I did throw in a heap of weasel words to
indicate that I'm not entirely happy stipulating it, but it's irrelevant
to the main subject of discussion, viz the necessity of firewalls.


As extra credit, what is the failure mode in the scenario you
present, how do you control it, and how does this compare to
the firewall-less scenario?

What you are doing is shifting your trust from the operating
systems of your working machines to the firewall that was built purely
to implement a security policy.  It is likely, then, that the
firewall will be more resilient to attacks than your working machines.

This isn't an answer to the question(s) I ask above.

Again, your point being?

In pointing out that the response wasn't an answer to the question,
or in asking the question in the first place?

I get the impression that many firewall absolutists (in this discussion
and elsewhere) haven't thought about what exactly the firewall is
supposed to be doing and how it is supposed to be doing it.  As near
as I can make out, there appear to be lots of warm fuzzies surrounding
firewalls and lots of FUD around application-level daemons (me, I
tend to have fear, uncertainty and doubt about just about -everything-
in a security topology; but then again, I'm a security fatalist).
Running bind 8.2 on an (wait for it) appropriately secured OpenBSD box
gets translated into `running probably flawed services directly to the
internet where every hacker can attack them'.  A firewall, I am told,
is somehow intrinsically better due to the presumed good intentions on
the part of the developers.

This is just wishful thinking.  Looking at the example of the machine
running named(8), the only interesting conceptual point to be made[2]
is in the case where the firewall in question is an application
gateway.  An interesting case can be made for independent reimplementations
of application-level protocols as a security mechanism.  To the best
of my knowledge, this aspect hasn't even been alluded to in this
discussion by anyone other than myself.
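
To make that point a bit more concrete, here is a minimal, hypothetical
sketch (in C, and not drawn from any particular product) of the kind of
thing an application gateway adds over a plain packet filter: it parses
the request with its own, independently written code and only relays
traffic it recognises as well-formed.  The header checks below are a toy
illustration for a DNS-style query, not anything like a complete validator.

/* Toy sketch of the "independent reimplementation" idea behind an
 * application gateway: parse the request yourself and relay only what
 * you recognise.  Illustrative only; not a complete DNS validator. */
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* Return 1 if the buffer looks like a plausible single-question DNS query. */
static int query_looks_sane(const unsigned char *msg, size_t len)
{
    if (len < 12 || len > 512)        /* 12-byte header; 512-byte UDP limit */
        return 0;
    if (msg[2] & 0x80)                /* QR bit set: a response, not a query */
        return 0;
    uint16_t qdcount = (msg[4] << 8) | msg[5];
    uint16_t ancount = (msg[6] << 8) | msg[7];
    if (qdcount != 1 || ancount != 0) /* expect one question, no answers */
        return 0;
    return 1;
}

int main(void)
{
    /* A hand-built query for "a." -- illustrative test data only. */
    unsigned char query[] = {
        0x12, 0x34,             /* ID */
        0x01, 0x00,             /* flags: standard query, recursion desired */
        0x00, 0x01,             /* QDCOUNT = 1 */
        0x00, 0x00,             /* ANCOUNT */
        0x00, 0x00,             /* NSCOUNT */
        0x00, 0x00,             /* ARCOUNT */
        0x01, 'a', 0x00,        /* QNAME: "a." */
        0x00, 0x01,             /* QTYPE = A */
        0x00, 0x01              /* QCLASS = IN */
    };

    if (query_looks_sane(query, sizeof query))
        printf("query passes the gateway's own checks; relay it to the daemon\n");
    else
        printf("query rejected before it ever reaches the daemon\n");
    return 0;
}

A packet filter, by contrast, sees only addresses and ports; whatever is
inside the permitted packets goes straight through to the daemon.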

I'd be interested in seeing the firewall absolutists attempt to make
a case for the necessity of firewalls based on specifics of intrusion
prevention, containment, failure mode control, u.s.w., rather than
(as has been generally done to this point) vigorous assertion.

That is, unless Mr Ranum kills the thread.  Which might not be a bad
idea from the looks of it---it doesn't appear that either side of the
discussion is getting much out of it at this point.  Which
is a pity.


At any rate, I'm not sure that professed intent in a design can
be assumed to be translated into functionality in the implementation.
In fact, I'm pretty sure it can't be.  Anyone familiar with the
historical record will be able to supply counterexamples to your argument.

The above paragraph offers an opinion with no evidence or
argument to back it up.  If you wish to be taken seriously, will you please
provide the 'counterexamples' (sic) or explain your meaning of 'the
historical record'?

Erm...are you seriously contesting my contention that things don't
always work the way they were designed to work?  Pardon me if I observe
that this strikes me as slightly disingenuous.


The difference being that application-level daemons are not
primarily built to implement a security solution. 

Obviously this depends on the application-level daemon.


Been there - I'm interested in your definition of 'secure'
software now.
Oh, my mistake, chroot forms an E3 gateway between your
service and the rest of the operating system... hmmm... Or maybe
you are just blindly trusting it?

No.

I said that servers exposed to the outside world should be `secured
appropriately'.  You seemed (and, indeed, seem) to find the idea somewhat
ludicrous.  I was merely explaining what I meant.  Part of what
I mean when I say that a server should be `secured appropriately'
is that it should, if possible, be running exposed application-level
daemons in a chroot.
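
For what it's worth, the idiom I have in mind looks roughly like the
following (a minimal sketch in C; the jail directory, the unprivileged
account name and the path to the daemon are all hypothetical, and a real
wrapper would want rather more error handling and a populated jail):

/* Sketch: confine a daemon to a chroot and drop root privileges before
 * handing control to it.  Paths and the account name are illustrative. */
#include <stdio.h>
#include <unistd.h>
#include <pwd.h>

int main(void)
{
    const char *jail = "/var/named";         /* hypothetical jail directory */
    struct passwd *pw = getpwnam("named");   /* hypothetical unprivileged user */

    if (pw == NULL) {
        fprintf(stderr, "no such user\n");
        return 1;
    }

    /* chroot() requires root; chdir() first so "." ends up inside the jail. */
    if (chdir(jail) != 0 || chroot(jail) != 0) {
        perror("chroot");
        return 1;
    }

    /* Drop group then user privileges; after this, root is gone for good. */
    if (setgid(pw->pw_gid) != 0 || setuid(pw->pw_uid) != 0) {
        perror("privilege drop");
        return 1;
    }

    /* The daemon now sees only the jail and runs unprivileged, so a
     * compromise of it is contained to this subtree. */
    execl("./sbin/named", "named", (char *)NULL); /* path relative to the jail */
    perror("execl");
    return 1;
}

(Recent bind, if memory serves, grew options to do the chroot and the
privilege drop itself, which comes to much the same thing.)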


Again you appear to misunderstand.  Consider the following:
i) I decide to use your non-firewall method.  I expend an amount of
energy testing the daemons that I am exposing to the outside world.
I find a number of bugs (this assumes that you are willing to expend
that level of energy).  I can either a) contact the vendor and
wait for a patch before iterating the testing; or b) using open source
products, try to fix it myself before iterating the testing.  Both of
these appear to be time-consuming, with a slow (bug-wait-patch-bug-wait-patch)
path towards a 'secure' piece of software.  Or:
ii) I decide to use a commercial firewall.  A far larger (several times, if not
several orders of magnitude, greater) amount of effort has been expended by the
company that built it than I expend in i) above.  The result is a
better tested and already fixed system.  How much further do I need to
go to explain the metrics of software correctness?

This is obviously just more question-begging.  Ignoring the hidden
contention that (for example) bind, sendmail and apache are less well analysed
than (for example) Firewall-1 (unless your argument holds some caveats
which weren't apparent to me), this doesn't address the question of
what functionality you're expecting to get out of the firewall.

As I noted previously, if you're just doing packet filtering, even if
we stipulate that your firewall is entirely bug-free, this doesn't imply
that it's doing much of anything as far as protecting your application-level
daemons from being exploited.

If you're running an application gateway then you're assuming that
your proxy is better than the daemon itself.  As I said, whether or
not this is a good assumption will depend on the application and the
proxy in question.  You seem to be asserting that this will -always-
be true, as a result of some belief about the rigour of firewall design
and implementation.  I do not find this a particularly compelling argument.

Beyond that, it's not even clear that running all inbound
data through a proxy will help the daemon any in the case of an
attempted exploit.


You can
implement sane security policies without the use of firewalls.

You can, but it is all down to amounts of effort versus
results.

I read this as a concession of the point.


If I was connecting a system to the internet,
I'd go for a firewall any day.

As, generally, would I.  I do not make the mistake of construing this
act of convenience and (to some extent) laziness to be an indication
of necessity.

I also, incidentally, think that the number of machines that can talk to
the internet (and _vice versa_) is much higher than it can or should be.  
There are comparatively few cases in which dedicated production machines (for
example) actually need (or should be able) to exchange data with external
hosts.







- -Steve

- -----
1     This is a distinction which is apparently quite elusive to people
      participating in online discussions.  And discussion in general,
      for that matter.
2     To my mind.  I'm willing to be proved incorrect.


-----BEGIN PGP SIGNATURE-----
Version: 2.6.2

iQCVAwUBN26UEyrw2ePTkM9BAQFIlwP/Zbg1CqS3G5W5PM7oGbQkEz5ZR8PqKkTO
iX9TEfgDePYW2eMTkUKyAu9Kw2kdnTnBN2bvO4MK0x160YtWCgF66EASV+mvHIH8
1R+LrrowFF6ddcWPEW0f7KBxao4IaAdqPEiKDb+xDDWLOan47NdIGUrcS6rES+jJ
ji5WNzcOXwY=
=R1kx
-----END PGP SIGNATURE-----


