Firewall Wizards mailing list archives

RE: Firewall RISKS


From: "Sheldrake, Kevin" <kevin.sheldrake () baedsl co uk>
Date: Wed, 23 Jun 1999 14:03:39 +0100

-----Original Message-----
From: Stephen P. Berry [SMTP:spb () incyte com]
Sent: 21 June 1999 20:36
To:   kevin.sheldrake () baedsl co uk
Cc:   firewall-wizards () nfr net; spb () incyte com
Subject:      Re: Firewall RISKS 

In message <609B8FA424F9D111A91A00805FD71EE2CF46F5@dslexxch3.spsxch>,
kevin.sheldrake () baedsl co uk writes:

When a company writes an implementation of an internet service
(such as bind), the majority of the testing is against the requirements
of the service.  When someone tests a firewall (however it was built)
the majority of the testing is against possible security holes.

Question begging.  It obviously depends on the application and the
firewall.  But even if we take it as read that firewalls are -more-
tested than application-level daemons, that doesn't imply that they are
necessarily -better- tested.  In other words, even if this is true,
it is quite possibly irrelevant.

That's a rather lame statement to make.  All supposition, no evidence
=> no argument.

[I'll forgo discussion of the bogus `statistically likely'
argument]

Maybe I don't understand the meaning of the words 'statistically'
or 'likely'.  In my mind, the first implies using historic data to
analyse patterns; the second implies a greater than half probability of
an event occurring.  It is, therefore, statistically likely that the
software you are running contains bugs.
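
As a back-of-envelope illustration (a minimal sketch in Python; the
defect density and code size below are assumed figures for illustration,
not measurements of any particular product):

import math

# Assumed residual post-release defect density and daemon size;
# hypothetical numbers chosen only to illustrate the argument.
residual_defects_per_kloc = 0.5
daemon_kloc = 40                    # a daemon of roughly 40,000 lines

expected = residual_defects_per_kloc * daemon_kloc   # ~20 latent defects
p_at_least_one = 1 - math.exp(-expected)             # simple Poisson model
print(p_at_least_one)  # effectively 1.0, well past 'greater than half'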

I'm not convinced that it is worth my while to be arguing the -firewall-
issues here; I'm overwhelmingly convinced that it isn't worthwhile to
argue statistics here.  In any case, the point is irrelevant to the
main argument, and to that end I've stipulated it.

My point was that the software you run is likely to contain bugs.  You
claim this is 'bogus' without any argument.  Looking at development
cycles or the release of 'patches' following product releases should
convince you that almost all software contains bugs.  It would be
foolish to think otherwise.

Suffice it to say, then, that I call the argument `bogus' because
attempting to reduce something as complex as the security of a
computer system/network to a single summary statistic can only yield
meaningless results in this context.  Also, such a statistic does not
necessarily translate into a probability (as seems to be implied[1]).

Maybe you did not understand my original statement.  I was not reducing
computer security to a single statistic; I was merely pointing out that
software is likely to contain bugs, that testing helps find bugs, and that
targeted testing and prolonged testing tend to find more bugs of the
targeted nature.  Estimates of probability can be drawn from historical
data.

Yes, bad software gets written.  Let's, just
for grins, posit that the latest bind (which is source available) and
the latest squid (also source available) are more buggy
than your average firewall product (which probably isn't going to be
source available), and work (below) on this hypothesis.

And the point of the above paragraph?  Making the source
available does not make software any more likely to work.  It is all to
do with the environment within which the software was developed and the
testing regime.

The guilty flee when no man pursueth.  I stipulated the point.

What does the above line mean?  I'm not in the business of flaming, but I feel
that Stephen P. Berry is not putting up any argument to justify his views.
Here is a dictionary definition of stipulate:

stipulate  v.tr.
1 demand or specify as part of a bargain or agreement.
2 (foll. by for) mention or insist upon as an essential condition.
3 (as stipulated adj.) laid down in the terms of an agreement.

You're right in noticing that I did throw in a heap of weasel words to
indicate that I'm not entirely happy stipulating it, but it's irrelevant
to the main subject of discussion, viz. the necessity of firewalls.

So let's try sticking to the argument.

As extra credit, what is the failure mode in the scenario you
present, how do you control it, and how does this compare to
the firewall-less scenario?

What you are doing is shifting your trust from the operating
systems of your working machines to the firewall that was built purely
to implement a security policy.  It is likely, then, that the
firewall will be more resilient to attacks than your working machines.

This isn't an answer to the question(s) I ask above.

Again, your point being?

In pointing out that the response wasn't an answer to the question,
or in asking the question in the first place?

You seem intent on discussing semantics and not on backing up your
arguments for not needing firewalls in certain circumstances.

I get the impression that many firewall absolutists (in this discussion
and elsewhere) haven't thought about what exactly the firewall is
supposed to be doing and how it is supposed to be doing it.  As near
as I can make out, there appear to be lots of warm fuzzies surrounding
firewalls and lots of FUD around application-level daemons (me, I
tend to have fear, uncertainty and doubt about just about -everything-
in a security topology.  but then again, I'm a security fatalist).
Running bind 8.2 on an (wait for it) appropriately secured OpenBSD box
gets translated into `running probably flawed services directly to the
internet where every hacker can attack them'.  A firewall, I am told,
is somehow intrinsically better due to the presumed good intentions on
the part of the developers.

Yep.  I'll stand by those comments.

This is just wishful thinking.  Looking at the example of the machine
running named(8), the only interesting conceptual point to be made[2]
is in the case where the firewall in question is an application
gateway.  An interesting case can be made for independent
reimplementations of application-level protocols as a security
mechanism.  To the best of my knowledge, this aspect hasn't even been
alluded to in this discussion by anyone other than myself.

Let's see, reimplementations of protocols.  Sounds like focused (or
targeted) development concentrating on security.  Sounds good,
but who's going to do it?  A lot of people are implementing security
today and tomorrow; not in six months' time.  (And in six months'
time, will this have been completed?  I think not.)

I'd be interested in seeing the firewall absolutists attempt to make
a case for the necessity of firewalls based on specifics of intrusion
prevention, containment, failure mode control, and so on, rather than
(as has been generally done to this point) vigorous assertion.

I've put up a simple argument for why firewalls are better than not-
firewalls.  Until this is shown to be incorrect, I do not see the point.

That is, unless Mr Ranum kills the thread.  Which might not be a bad
idea from the looks of it---it doesn't appear that either side of the
discussion is getting much out of it at this point.  Which
is a pity.

Due, mainly, to your decision to not provide any argument or evidence.

At any rate, I'm not sure that professed intent in a design can
be assumed to be translated into functionality in the implementation.
In fact, I'm pretty sure it can't be.  Anyone familiar with the
historical record will be able to supply counterexamples to your
argument.

The above paragraph offers an opinion with no evidence or
argument to back it up.  If you wish to be taken seriously, will you
please provide the 'counterexamples' (sic) or explain your meaning of
'the historical record'?

Erm...are you seriously contesting my contention that things don't
always work the way they were designed to work?  Pardon me if I observe
that this strikes me as slightly disingenuous.

No, I am pointing out that you are not providing any evidence.  Phrases like
'I'm not sure', 'I'm pretty sure', and 'anyone familiar' do not provide
weight to your case.


The difference being that application-level daemons are not
primarily built to implement a security solution. 

Obviously this depends on the application-level daemon.

Agreed.

Been there - I'm interested in your definition of 'secure'
software now.
Oh, my mistake, chroot forms an E3 gateway between your
service and the rest of the operating system... hmmm... Or maybe
you are just blindly trusting it?

No.

I said that servers exposed to the outside world should be `secured
appropriately'.  You seemed (and, indeed, seem) to find the idea somewhat
ludicrous.  I was merely explaining what I meant.  Part of what
I mean when I say that a server should be `secured appropriately'
is that it should, if possible, be running exposed application-level
daemons in a chroot.
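
(For concreteness, a minimal sketch in Python of what running a daemon
in a chroot involves; the jail path and uid/gid below are hypothetical.
Note that chroot(2) narrows the filesystem view only; it does not
restrict system calls, and a process that keeps root can often escape
the jail, which is why privileges are dropped immediately.)

import os

JAIL = "/var/named"    # hypothetical jail directory, prepared in advance
UNPRIV_UID = 53        # hypothetical unprivileged uid/gid for the daemon
UNPRIV_GID = 53

def enter_jail():
    os.chroot(JAIL)    # requires root; narrows the visible filesystem
    os.chdir("/")      # don't keep a working directory outside the jail
    os.setgid(UNPRIV_GID)
    os.setuid(UNPRIV_UID)   # once uid != 0, classic chroot escapes close

enter_jail()
# ... the daemon's main loop would run here, jailed and without root ...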

I find the word 'appropriately' ludicrous.  You have not stated any way of
distinguishing between appropriately secured and inappropriately secured
(or appropriately insecured).  I was pointing out that chroot may not really
be providing the security that you believe it is.  Simple checklist: has the
particular implementation been certified (i.e. to E3)?  Is it running on an
operating system that has been certified?  If not, where does your faith
in chroot come from?

Again you appear to misunderstand.  Consider the following:
i) I decide to use your non-firewall method.  I expend an amount of
energy testing the daemons that I am exposing to the outside world.
I find a number of bugs (this is assuming that you are willing to expend
that level of energy).  I can either a) contact the vendor and
wait for a patch before iterating the testing; or b) using open source
products, try to fix it myself before iterating the testing.  Both of
these appear to be time-consuming, with a slow
(bug-wait-patch-bug-wait-patch) path towards a 'secure' piece of
software.  Or
ii) I decide to use a commercial firewall.  A large (several times if not
several orders of magnitude greater) amount of effort has been expended
by the company that built it than I expend in (i) above.  The result is a
better tested and already fixed system.  How much further do I need to
go to explain the metrics of software correctness?

This is obviously just more question-begging.  Ignoring the hidden
contention that (for example) bind, sendmail and apache are less well
analysed than (for example) Firewall-1 (unless your argument holds some
caveats which weren't apparent to me), this doesn't address the question
of what functionality you're expecting to get out of the firewall.

As I noted previously, if you're just doing packet filtering, even if
we stipulate that your firewall is entirely bug-free, this doesn't imply
that it's doing much of anything as far as protecting your
application-level daemons from being exploited.
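
(To make that concrete, a minimal sketch of a stateless filter
decision; the field names and allowed-ports policy are hypothetical.
The verdict depends only on header fields, so a hostile DNS payload on
an allowed port passes exactly like a benign one.)

# Hypothetical policy: (protocol, destination port) pairs to pass.
ALLOWED = {("udp", 53), ("tcp", 53), ("tcp", 25)}

def filter_verdict(proto: str, dst_port: int, payload: bytes) -> str:
    """Stateless, header-only decision; the payload is never examined."""
    return "pass" if (proto, dst_port) in ALLOWED else "drop"

# An exploit aimed at named(8) and a legitimate query both arrive as
# ("udp", 53, ...) and both get "pass".
print(filter_verdict("udp", 53, b"hostile or benign alike"))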

If you're running an application gateway then you're assuming that
your proxy is better than the daemon itself.  As I said, whether or
not this is a good assumption will depend on the application and the
proxy in question.  You seem to be asserting that this will -always-
be true, as a result of some belief about the rigour of firewall design
and implementation.  I do not find this a particularly compelling
argument.

I am stating that the proxy should be more secure than the daemon
itself due to the design, implementation, and testing processes.  Obviously
there will be cases where this is not true.  

Beyond that, it's not clear that even running all inbound
data through a proxy will help the daemon any in the case of an
attempted exploit.

I assume that the proxies should do protocol analysis, which should
distinguish (generally) between attempted exploits and genuine requests.
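
(As a sketch of the sort of analysis meant here, a proxy might forward
only commands matching a conservative grammar; the verb list, line
limit and grammar below are illustrative assumptions, not a complete
SMTP validator.)

import re

KNOWN_VERBS = {"HELO", "EHLO", "MAIL", "RCPT", "DATA", "RSET", "NOOP", "QUIT"}
LINE_RE = re.compile(r"^[A-Z]{4}( [\x20-\x7e]{1,500})?$")   # printable ASCII

def admit(line: str) -> bool:
    """Forward only well-formed commands; reject everything else."""
    if len(line) > 512:                  # conservative line-length ceiling
        return False
    if not LINE_RE.match(line):
        return False                     # binary junk, odd framing, etc.
    return line.split(" ", 1)[0] in KNOWN_VERBS

assert admit("MAIL FROM:<kev@example.org>")     # genuine request: forwarded
assert not admit("MAIL FROM:" + "A" * 4096)     # overflow attempt: dropped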

You can
implement sane security policies without the use of firewalls.

You can, but it is all down to amounts of effort versus
results.

I read this as a concession of the point.

It was more along the lines of 'I could write a word processor, but I'd
rather use a commercially available one.'

If I was connecting a system to the internet,
I'd go for a firewall any day.

As, generally, would I.  I do not make the mistake of construing this
act of convenience and (to some extent) laziness to be an indication
of necessity.

I also, incidentally, think that the number of machines that can talk to
the internet (and _vice versa_) is much higher than it can or should be.
There are comparatively few cases in which dedicated production machines
(for example) actually need (or should be able) to exchange data with
external hosts.

This appears to be a good point to leave it, then. 

Kev

Kevin Sheldrake
CCIS Prototypes and Demonstrations
British Aerospace Defence Systems
[+44 | 0] 1202 408035, kevin.sheldrake () baedsl co uk



