Firewall Wizards mailing list archives

RE: Firewall RISKS


From: kevin.sheldrake () baedsl co uk
Date: Thu, 17 Jun 1999 12:05:49 +0100





Stephen P. Berry [SMTP:spb () incyte com] writes:

In message <609B8FA424F9D111A91A00805FD71EE2CF46EA@dslexxch3.spsxch>,
kevin.sheldrake () baedsl co uk writes:

If we assume that we are trying to protect against hackers and DoS
attacks from the outside network (the internet), we are assuming that
the attacks are being made against flaws in the software running the
services on the machines.  These flaws are generally introduced into
the software accidentally, and the testing performed pre-release
either does not show them up or the software is released with known
bugs pending fixes.  Because of this, you are statistically likely to
be running flawed software (maybe not flawed to the extent that it
can be exploited, but certainly flawed enough not to provide
functionality exactly as specified/required/offered.)

A firewall and packet filter will firstly limit the services
available to the outside world to the ones you wish to allow (it is
certainly possible that you haven't realised that rshd is running
until it is exploited); and secondly, it can perform protocol
analysis in order to spot and drop illegal traffic.
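
As a rough illustration of the 'limit the services' point, a packet
filter in front of (or on) the exposed hosts can be told to drop
everything except the ports you have deliberately offered.  A minimal
sketch using ipchains (the Linux 2.2 filter of the day); the address,
interface and port choices are illustrative only, and a real ruleset
would also need rules for return traffic:

    # Default-deny inbound, then open only what we intend to offer.
    ipchains -P input DENY
    # SMTP and DNS to the published server address (192.0.2.10 is an
    # example address, not a recommendation).
    ipchains -A input -p tcp -d 192.0.2.10 25 -j ACCEPT
    ipchains -A input -p tcp -d 192.0.2.10 53 -j ACCEPT
    ipchains -A input -p udp -d 192.0.2.10 53 -j ACCEPT
    # Anything else arriving on the outside interface is logged and
    # dropped - including the rshd you forgot was running.
    ipchains -A input -i eth0 -j DENY -l

With a default-deny policy the forgotten rshd never sees a packet,
whether or not you knew it was listening.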

Consider:  (most) firewalls are (at least partially) software.
Iterate your argument.

When a company writes an implementation of an internet service (such
as bind), the majority of the testing is against the requirements of
the service.  When someone tests a firewall (however it was built),
the majority of the testing is against possible security holes.

[I'll forgo discussion of the bogus `statistically likely' argument]

Maybe I don't understand the meaning of the words 'statistically' or
'likely'.  In my mind, the first implies using historic data to
analyse patterns; the second implies a greater than half probability
of an event occurring.  It is, therefore, statistically likely that
the software you are running contains bugs.

In any case:  stipulated.  Yes, bad software gets written.  Let's,
just for grins, posit that the latest bind (which is source
available) and the latest squid (also source available)[1] are more
buggy than your average firewall product (which probably isn't going
to be source available)[2], and work (below) on this hypothesis.

And the point of the above paragraph?  Making the source available
does not make software any more likely to work.  It is all to do with
the environment within which the software was developed and the
testing regime.

As extra credit, what is the failure mode in the scenario you
present, how do you control it, and how does this compare to the
firewall-less scenario?

What you are doing is shifting your trust from the operating systems
of your working machines to the firewall that was built purely to
implement a security policy.  It is likely, then, that the firewall
will be more resilient to attacks than your working machines.

This isn't an answer to the question(s) I ask above.

Again, your point being?

At any rate, I'm not sure that professed intent in a design can be
assumed to be translated into functionality in the implementation.
In fact, I'm pretty sure it can't be.  Anyone familiar with the
historical record will be able to supply counterexamples to your
argument.

The above paragraph offers an opinion with no evidence or argument to
back it up.  If you wish to be taken seriously, will you please
provide the 'counterexamples' (sic) or explain your meaning of 'the
historical record'?

That being said, you are correct in suggesting that having a specific
design goal in mind can be valuable---or indeed necessary---in
implementing a security solution.  It's not clear that this isn't
just as true for application-level daemons as it is for
firewalls---which is what you seem to be suggesting.

The difference being that application-level daemons are not primarily
built to implement a security solution.

Trivially, I'd say a workable no-firewall setup given the above
[removed] constraints would be to simply multihome whatever servers
need to be exposed to both the internet and the internal network
(i.e., bind 8.x, a squid server, and so on).  Secure them
appropriately.  Have all your desktop toys talk to the internal
interfaces of these boxen.
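
For concreteness, the dual-homed arrangement being proposed here
looks something like the following on a Linux box of the period; the
interface names and addresses are made up, and the essential point is
that forwarding between the two legs stays off:

    # eth0 faces the internet, eth1 faces the internal network
    # (names and addresses are illustrative).
    ifconfig eth0 203.0.113.10 netmask 255.255.255.0 up
    ifconfig eth1 10.1.1.1 netmask 255.255.255.0 up

    # The box is a host on both networks, never a router between them.
    echo 0 > /proc/sys/net/ipv4/ip_forward

Internal clients are then pointed at the inside address for DNS and
proxying, so nothing inside talks to the outside except through the
daemons on this host.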

The problem here is that you are exposing probably-flawed services
directly to the internet, where every hacker can attack them.  I
think your basic ignorance is shown in the 'Secure them
appropriately' statement - either that or you're having a laugh.
Maybe I am not party to this new technology that can secure software
- do you have a reference to it?

You could start with `man chroot'.

Been there - I'm interested in your definition of 'secure' software
now.  Oh, my mistake, chroot forms an E3 gateway between your service
and the rest of the operating system... hmmm...  Or maybe you are
just blindly trusting it?
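
For reference, the sort of chroot deployment under discussion looks
roughly like this; the jail path and the unprivileged account are
invented for the example, and (as the exchange makes clear) chroot
only confines a compromised daemon, it does not make the daemon
itself correct:

    # Build a minimal jail for the nameserver (contents illustrative).
    mkdir -p /chroot/named/etc /chroot/named/dev /chroot/named/var/run
    cp /etc/named.conf /chroot/named/etc/
    mknod /chroot/named/dev/null c 1 3   # Linux major/minor for /dev/null

    # Recent BIND 8 can chroot and drop root itself (-t and -u);
    # otherwise wrap the daemon with chroot(8).
    # 'dnsuser' is an example unprivileged account.
    /usr/sbin/named -t /chroot/named -u dnsuser

A jailed named that is exploited still hands the attacker a process
on your DNS server; the disagreement above is about how much that
process can then reach.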

As far as exposing `probably flawed' services to the Outside, it
isn't clear what you think your firewall is going to do for you.  If
you're just doing packet filtering, chances are you're not buying
yourself anything except (perhaps) having another IP stack between
the bad guy and your application[3].  If you're using an application
gateway, all you're doing is relying on it instead of the application
itself.  Whether or not this is a good bet[4] will depend on the
application and the proxy in question.  In any case, this scenario is
vulnerable to the same argument you're trying to use to recommend it.

Again you appear to misunderstand.  Consider the following:

i) I decide to use your non-firewall method.  I expend an amount of
energy testing the daemons that I am exposing to the outside world.
I find a number of bugs (this is assuming that you are willing to
expend that level of energy.)  I can either (a) contact the vendor
and wait for a patch before iterating the testing, or (b) with open
source products, try to fix it myself before iterating the testing.
Both of these appear to be time-consuming, with a slow
(bug-wait-patch-bug-wait-patch) path towards a 'secure' piece of
software.  Or

ii) I decide to use a commercial firewall.  A far larger amount of
effort (several times, if not several orders of magnitude, greater)
has been expended by the company that built it than I expend in (i)
above.  The result is a better-tested and already-fixed system.  How
much further do I need to go to explain the metrics of software
correctness?

Anyway, when I say you should secure your servers appropriately, the
process involves things like OS selection, choosing the application,
making sure the two play well together, uninstalling everything that
isn't necessary to the service the machine will be providing, running
the daemon in a chroot if possible, setting up auditing, leaving
whatever tricks and traps are appropriate, and so forth.

A lot of which probably does (or should) come free with a firewall.

Let me give you a ferinstance.  Go to (one of) your DNS server(s).
Is telnet(1) installed?  If so, why?  Is inetd(8) running?  If so,
why?

How many people have access to the machine, and how often do they
log in?  Do you allow remote access to the machines (as opposed to
requiring login on the console), and if so why?  What sort of audit
trail is left when someone logs in?  Just something left in
{u|w}tmp?  Something sent to syslog(2)?  Ever consider modifying
login(1) to send a mail message whenever it's executed[5]?  How about
sh(1)?  Or ls(1)?
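
The questions above translate into a few minutes at the shell; the
exact commands and paths vary by platform, but on a typical Unix of
the time they look roughly like:

    # What is actually listening, and does the list match the
    # service this box is supposed to provide?
    netstat -an | grep LISTEN

    # Is inetd running at all?  On a single-purpose DNS server it
    # usually has no reason to be.
    ps ax | grep '[i]netd'

    # If inetd must stay, see what it still offers, comment out the
    # rest, and HUP it (paths differ between systems).
    grep -v '^#' /etc/inetd.conf
    kill -HUP `cat /var/run/inetd.pid`

Removing telnetd and the r-services entirely, rather than merely
filtering them, leaves nothing for a misconfigured filter to
accidentally expose.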

My point is that if you're setting up your DNS server/MTA/HTTP
proxy/whatever by just doing a `./config;make;make install', you're
probably Wrong[6].

All good ideas but not ones that counter my arguments for
firewalls.

(I assume that should the gateway server be successfully attacked,
routing could then be turned on, allowing routed access to your
internal, insecure machines.)

This would be true if your firewall was compromised.

Yes, but, again, the likelihood is reduced for reasons given
above.

If an attacker compromised one of the multihomed servers, they could
try to get it to route traffic between the two segments.  If you've
set up the machine (and the segments) appropriately, this should be
nontrivial and is something that should leave a hell of an audit
trail.
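
One cheap way to get part of that audit trail, at least for the
specific trick of turning routing back on, is a periodic check run
from cron; the path is Linux-specific and the script is only a
sketch:

    #!/bin/sh
    # Hypothetical watchdog: alarm (and re-disable) if IP forwarding
    # is ever switched on on the dual-homed host.
    if [ "`cat /proc/sys/net/ipv4/ip_forward`" != "0" ]; then
        logger -p auth.alert "ip_forward enabled on `hostname`"
        echo 0 > /proc/sys/net/ipv4/ip_forward
    fi

Sending that syslog entry to a separate loghost keeps the trail
intact even if the compromised box is later cleaned up by the
attacker.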

Depending upon your system, but this is also true of firewalls.

Anyway, I'll reiterate at this point that I'm -not- suggesting that
firewalls are useless or that they shouldn't be used or anything of
the sort.  My contention is simply that they are not the be-all,
end-all of IT security as appears to be generally (and devoutly)
believed.  You can implement sane security policies without the use
of firewalls.

You can, but it is all down to amounts of effort versus
results.  If I was connecting a system to the internet,
I'd go for a firewall any day.

Kev
 

Kevin Sheldrake
CCIS Prototypes and Demonstrations
British Aerospace Defence Systems
[+44 | 0] 1202 408035, kevin.sheldrake () baedsl co uk


