Firewall Wizards mailing list archives

RE: Firewall RISKS


From: kevin.sheldrake () baedsl co uk
Date: Wed, 16 Jun 1999 10:21:07 +0100

Sorry to butt in, but I think there are some basic flaws in the argument.

 Stephen P. Berry [SMTP:spb () incyte com] writes:

> Realising the above (i.e., a firewall dropped into the mix doesn't make
> the infrastructure secure), what steps would you still consider necessary
> to reach a stage of acceptable security?  What steps beyond that would be
> necessary before the firewall became (in your eyes) superfluous, and why
> do you consider those steps impossible or impractical (in all or in an
> overwhelming majority of situations)?

If we assume that we are trying to protect against hackers and DoS attacks
from the outside network (the internet), we are assuming that the attacks
are being made against flaws in the software running the services on the
machines.

These flaws are generally introduced into the software accidentally, and the
testing performed pre-release either fails to reveal them or the software is
released with known bugs pending fixes.

Because of this, you are statistically likely to be running flawed software
(perhaps not flawed to the extent that it can be exploited, but certainly
flawed enough not to provide functionality exactly as
specified/required/offered).

A firewall and packet filter will, firstly, limit the services available to
the outside world to the ones you wish to allow (it is entirely possible
that you won't realise rshd is running until it is exploited); and,
secondly, it can perform protocol analysis in order to spot and drop
illegal, protocol-violating traffic.
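
To make the first point concrete, here is a rough sketch (in Python, with
made-up ports rather than any real firewall ruleset) of what "limit the
services available to the outside world" means: only explicitly permitted
(protocol, port) pairs are accepted, so a forgotten rshd is unreachable
even though it is running.

    # Illustrative allow-list; a real ruleset would live in the packet
    # filter itself, not in application code.
    ALLOWED_INBOUND = {
        ("tcp", 25),    # SMTP to the mail relay
        ("tcp", 80),    # HTTP to the web server
        ("udp", 53),    # DNS queries
    }

    def filter_packet(protocol, dest_port):
        """Accept only explicitly permitted services; drop everything else."""
        if (protocol, dest_port) in ALLOWED_INBOUND:
            return "accept"
        return "drop"

    print(filter_packet("tcp", 80))    # accept
    print(filter_packet("tcp", 514))   # drop - the unnoticed rshd never sees the packet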

> As extra credit, what is the failure mode in the scenario you present,
> how do you control it, and how does this compare to the firewall-less
> scenario?

What you are doing is shifting your trust from the operating systems of
your working machines to a firewall that was built purely to implement a
security policy.  It is likely, then, that the firewall will be more
resilient to attacks than your working machines.

 

> Trivially, I'd say a workable no-firewall setup given the above [removed]
> constraints would be to simply multihome whatever servers need to be
> exposed to both the internet and the internal network (i.e., bind 8.x, a
> squid server, etc.).  Secure them appropriately.  Have all your desktop
> toys talk to the internal interfaces of these boxen.
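
(For concreteness, a rough sketch of the quoted dual-homed setup, in Python
with invented addresses: an internal-only service binds to the inside
interface alone, so desktop machines can reach it while nothing answers on
the outside address.)

    # Sketch of a dual-homed box with invented addresses:
    # 203.0.113.10 on the internet side, 192.168.1.1 on the internal side.
    import socket

    INTERNAL_ADDR = "192.168.1.1"   # inside interface (illustrative)
    ADMIN_PORT = 8080               # internal-only service (illustrative)

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((INTERNAL_ADDR, ADMIN_PORT))  # not bound on 203.0.113.10
    srv.listen(5)
    # Desktop machines connect to 192.168.1.1:8080; nothing listens on the
    # external address for this port.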

The problem here is that you are exposing probably-flawed services directly
to the internet, where every hacker can attack them.  I think your basic
ignorance is shown in the 'Secure them appropriately' statement - either
that or you're having a laugh.  Maybe I am not party to this new technology
that can secure software - do you have a reference for it?  Maybe there is
technology now that can remove bugs from programs.  Maybe formal methods
can now be proved automatically and also be converted automatically into
concrete code?  Or maybe not...  (I assume that should the gateway server
be successfully attacked, routing could then be turned on, allowing routed
access to your internal, insecure machines.)


> Note that I'm not suggesting that this sort of solution is optimal for
> all cases.  I'm just trying to get you to see that, even stipulating
> your `real world' conditions, a viable security policy can be implemented
> without inclusion of a firewall.  And I'm not even going to go off on
> the subject of solving protocol design problems with firewalls to do it.

The solution is flawed.  Perhaps you could explain?

Kev

Kevin Sheldrake
CCIS Prototypes and Demonstrations
British Aerospace Defence Systems
[+44 | 0] 1202 408035, kevin.sheldrake () baedsl co uk


