Firewall Wizards mailing list archives

Re: firewalls and the incoming traffic problem


From: Darren Reed <darrenr () cyber com au>
Date: Wed, 1 Oct 1997 10:53:03 +1000 (EST)

In some mail I received from Bennett Todd, sie wrote

> On Mon, Sep 29, 1997 at 12:06:50PM +1000, Darren Reed wrote:
> > It would seem that the "ultimate" firewall is one in which you can safely
> > and accurately emulate the backend handling of some data, observe what
> > happens as a result of that handling and then decide what to do with it.
>
> I dunno; I'm not sure that's implementable in practice, and I am sure that it
> would leave us with the same problem we have now, namely trying to keep up
> with the cleverness of potential attackers.

I agree about the implementation problems, but the idea is not to try to keep
up with the cleverness of attackers but to confine things to what is allowed;
i.e. extend the security policy of "deny all but what you explicitly allow"
from the firewall's network rules to the applications which run on it.  For
example, if I write a program to watch sendmail or mail.local which knows
that no other programs should be executed and that data should not be written
anywhere but to a spool directory, it shouldn't matter whether the attacker
is using an old or a new method to "break out", because we're not trying to
cover the "methods" - only the result.

Running things in a chroot'd environment achieves a similar result: if, in
delivering mail, something tries to run /bin/sh and I've chroot'd to
/var/spool where there is no /bin/sh, the delivery fails.  Whilst we're not
protecting ourselves from any specific mail handling problem, we're confining
the delivery of mail to what we allow.  Now there may be a case where you
want /bin/sh to be run by a .forward or something, so testing the delivery
of mail in a chroot'd environment is not perfect.
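
To make that concrete, here is a rough sketch of the sort of wrapper I have
in mind - the jail path, the uid and the name of the delivery binary are
only placeholders, not a real configuration:

/*
 * Sketch only: JAIL, AGENT and SAFEUID are placeholders.
 * Lock the delivery agent into /var/spool so an attempt to
 * exec /bin/sh finds nothing to run.
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define JAIL    "/var/spool"
#define AGENT   "/deliver"      /* delivery program copied into the jail */
#define SAFEUID 32767           /* some unprivileged uid, e.g. "nobody" */

int
main(int argc, char *argv[])
{
        /* lock the process into the jail before touching any mail */
        if (chroot(JAIL) == -1 || chdir("/") == -1) {
                perror("chroot");
                exit(1);
        }
        /* give up root so the agent can only write where we let it */
        if (setgid(SAFEUID) == -1 || setuid(SAFEUID) == -1) {
                perror("setuid");
                exit(1);
        }
        /* from here on, "/bin/sh" means /var/spool/bin/sh - absent */
        execv(AGENT, argv);
        perror("execv");        /* reached only if the exec itself fails */
        exit(1);
}

Anything the agent tries to exec has to exist under /var/spool, so the usual
trick of spawning a shell simply finds nothing to run.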

However, I'm all but resigned to admitting defeat on the trojan horse front
(be it viruses or anything else) when it comes to transferring programs
across the 'net as uuencoded bits or otherwise.  There are just too many
possible covert channels (and depths of nesting within them), of possibly
unlimited complexity, to expect a 100% result.

[...]
> The idea is to ``tag'' data with a security level, and provide a mechanism
> for guaranteeing that such tagged data isn't allowed where it shouldn't be.
> In particular, I envision OSes including support for this, extended to
> fairly strong networking underpinnings (perhaps using security features
> like the recent IP work). As a for instance, you could run the latest
> steaming heap of bits from Netscape or Microsoft, and you'd naturally
> install them with an explicit trust level of Zero, or perhaps Negative :-).
> They could interact with the internet, but would basically lie in a highly
> restricted box; only restricted, tightly controlled interactions would be
> allowed with anything else inside the security perimeter.
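
If I've understood the proposal, it amounts to something like the following
contrived sketch - the tag values, the struct and the checking function are
all invented for illustration, not any real OS interface:

/*
 * Every buffer carries a trust tag; a copy is refused unless the
 * source is at least as trusted as the destination demands.
 */
#include <stdio.h>
#include <string.h>

enum trust { UNTRUSTED = 0, INTERNAL = 1, TRUSTED = 2 };

struct tagged_buf {
        enum trust      tag;            /* label travels with the data */
        char            data[256];
};

int
tagged_copy(struct tagged_buf *dst, const struct tagged_buf *src,
    enum trust required)
{
        if (src->tag < required) {
                fprintf(stderr, "refused: source tag %d < required %d\n",
                    src->tag, required);
                return (-1);
        }
        memcpy(dst->data, src->data, sizeof(dst->data));
        if (dst->tag > src->tag)
                dst->tag = src->tag;    /* result is no more trusted */
        return (0);
}

The check only helps, though, if the program actually goes through it every
time data moves.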

What about `buffer overruns' of internal program variables causing a program
to do unexpected things?  I mention these because quite possibly you have
storage space for variables of the same "tag type" next to each other, in
which case this sort of protection buys you little or nothing.  I'm not quite
sure how this will stop badly written programs from doing things which
they're not meant to do but can do.
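
To make the worry concrete, a contrived fragment (the struct and the strings
are made up): two fields that would carry the same tag sit next to each
other in memory, so an overrun of the first rewrites the second and no tag
check is ever consulted.

/*
 * Deliberately unsafe demonstration - do not reuse.
 */
#include <stdio.h>
#include <string.h>

struct request {
        char    command[16];    /* filled from untrusted input */
        char    spooldir[32];   /* same "tag", adjacent in memory */
};

int
main(void)
{
        struct request r;

        strcpy(r.spooldir, "/var/spool/mail");
        /* 32 bytes written into a 16 byte field: spills into spooldir */
        strcpy(r.command, "AAAAAAAAAAAAAAAA/tmp/evil_spool");
        printf("spooldir is now: %s\n", r.spooldir);
        return (0);
}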

Darren


