Firewall Wizards mailing list archives

RE: firewalls and the incoming traffic problem


From: Dana Nowell <DanaNowell () corsof com>
Date: Mon, 29 Sep 1997 11:58:04 -0400

OK, maybe I missed something but ...

If I build an entryway that has a DMZ and I populate the DMZ with service
handlers that accept and fulfill the external requests, don't I mitigate
the problem?

For example, I put a mail gateway in the DMZ.  All internal mail is sent to
the internal mail hub, which knows about all the internal users and forwards
all external mail out to the external mail gateway.  The external gateway
takes all mail it receives and delivers it (assume anti-SPAM propagation
filters are in place).  External mail from the internal mail hub is sent
over the Internet; Internet mail is resolved via aliases on the mail gateway
and forwarded to the internal mail hub (if necessary).  I can handle all
addressing trouble in the external mail gateway (proxy), as no mail passes
internally unless an alias is present in the aliases file (this discussion
assumes sendmail, as it seems to be the common denominator; any mailer
supporting an alias mechanism can be used).  The obvious problem is
maintaining the aliases file and the current patch level of the gateway.  A
possible bottleneck exists, and can be solved by running multiple gateways
in parallel.
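
To make the alias mechanism concrete, a minimal sketch of the gateway's
aliases file might look like this (the hostnames and user names are made
up for illustration; substitute your own internal hub):

    # /etc/aliases on the DMZ mail gateway -- only names listed here
    # ever get forwarded inside; everything else dies at the gateway
    dana:       dana@mailhub.internal.example.com
    sales:      sales@mailhub.internal.example.com
    postmaster: root

    # after editing, rebuild the alias database:
    #   newaliases

Mail addressed to a user not in this file never crosses into the internal
net, which is exactly the exposure-limiting behavior described above.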

The same paradigm exists for WWW servers, say an external server that is
refreshed periodically from an internal server via FTP.  Same for FTP
servers, and most other services.  The main issue is setting up the
internal-to-external mirroring (a sketch follows).  Now all 'unsolicited'
services supported at the corporate/division/non-individual level are
handled in the DMZ on sacrificial hosts mirroring data from internal hosts.
A breach nets you access to an untrusted host in the DMZ, the same access
you had before from the attacking host (discounting use as a jump point to
attack other nets).
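
As a sketch of the mirroring, the push can be as simple as a nightly cron
job on the internal master that tars up the content and FTPs it out to the
DMZ host (hostnames, paths, and the account below are assumptions for
illustration, not a recipe):

    #!/bin/sh
    # push-www.sh -- run nightly from cron on the INTERNAL master.
    # The connection originates inside-out, so the mirror firewall only
    # needs to pass internal->DMZ traffic; the DMZ host never logs in
    # to anything inside.
    cd /export/www || exit 1
    tar cf /tmp/site.tar .
    ftp -n dmz-www.example.com <<EOF
    user wwwpush secret
    binary
    put /tmp/site.tar site.tar
    quit
    EOF

The important design point is the direction: the sacrificial host is always
the passive recipient and holds no credentials for anything internal.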

Now for solicited access, you have one or more firewalls bridging the
internal net to the DMZ (use a separate firewall to handle the mirror
traffic, and point all hosts in the DMZ at that more specific firewall for
any internal traffic).

Now, unsolicited traffic always terminates in the DMZ at a sacrificial
host: you break it, you get squat.  The possible exception is the email
host, which should only accept mail destined for external addresses or for
a known alias, minimizing exposure.  If the email host is compromised, it
ONLY has access to the internal hub on port 25 (enforced via a firewall
proxy/filter).  This implies a two-phase break-in is required: first the
sacrificial mail host, then an attack on the internal hub; hopefully the
admin will notice :).
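
The port 25 restriction is a one-line filter.  On a Cisco-style screening
router, for instance, it might look like this sketch (the addresses are
placeholders):

    ! inbound on the DMZ interface: the sacrificial mail gateway may
    ! open port 25 on the internal hub and nothing else inside
    access-list 101 permit tcp host 192.168.10.25 host 10.1.1.5 eq smtp
    access-list 101 deny   ip  host 192.168.10.25 10.0.0.0 0.255.255.255

A proxying firewall can enforce the same restriction, with the bonus of
protocol sanity checks on the SMTP stream.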

Add more unsolicited services, add more sacrificial hosts in the DMZ.  An
issue arises if you need access to internal data in real time: never a good
idea, but sometimes required by management.  If yesterday's data is OK,
replicate it to the DMZ host.

So unless real-time access is needed, why is this increase in external
services a problem (other than administration and automated tool building)?

So Marcus, what did I miss??




On Sun, 28 Sep 1997 11:32:19 +0000 Marcus J. Ranum expounded:

I'm concerned that firewall technologies are going to
reach an impasse in the next couple of years over what
I am calling the "incoming traffic problem."  Briefly, the
problem:
      - Firewalls are good at providing access control
      on return traffic that is in response to a request
      that originated behind the firewall
      - Firewalls are poor at providing access control
      on "unsolicited" incoming traffic to generic
      services that are "required" as part of being on
      the Internet
      - The number of generic services is increasing
      slowly
      - The number of implementations of the generic
      services is increasing dramatically

Let's take Email as the perfect example. If I have a mail
server behind a firewall, and I want to receive Email,
I have to allow it in to my mail server somehow. More
importantly, for Email to work the way we want it to, I
have to allow Email from virtually any site in to that
mail server. Therefore, the firewall's protection is reduced
as regards my Email service. (I'll come back to the
proxy/nonproxy issue later.) So, we're back to worrying about
sendmail - or are we? Nowadays there are zillions of
implementations of SMTP, on many different O/S platforms,
and it's likely that there are security holes (of one sort or
another) in many of them.

The proxy/nonproxy discussion is becoming increasingly
irrelevant, as a result. Let's assume I'm using some kind
of turbo-whomping stateful filter -- in that case I need to
worry A Whole Lot about the implementation of my Email
service. If I'm allowing the whole world to reach port
25 on my mail server, then the whole world can probe
it for bugs, and, if I'm running a buggy mailer, I have a
real problem. If I'm using a proxy firewall, the proxy
may perform some additional checks, and may block
some well-known attacks, but the problem is still there.
What if I have a proxy firewall built by a UNIX guru,
which knows about mail to: /bin/sh, but which doesn't
know about mail to: c:\autoexec.bat? Those are the
easy cases -- what about the bug in bubbmail1.2 for
Windows NT, where if you send mail addressed to
to: <admincommand: reboot> it will reboot the
machine? A little feature that crept in there...

Summary: firewalls originally offered the promise that
you could "install a firewall and not worry about your
internal security."  Now, it's clear that firewalls force
you to split your security between the firewall and host
security on all the systems to which the firewall permits
incoming  traffic.

If I'm going to have to worry about the host security
and the server side s/w on my internal systems, why
shouldn't I just use a router with gross-level filtering
to channel traffic into a few carefully configured
backend servers? The "hard part" is doing the backend
configuration anyhow!!

What's worrying me is all the folks I've seen who put
a firewall in, and believe it is going to somehow protect
the incoming traffic. :( I had a consulting gig where
the customer had a very high profile target website
behind a bunch of proxy firewalls - and for performance
the firewalls were just using plugboard proxies to copy
the data back and forth. AND the web server was a
version of NCSA with known holes. Even if they had had
a proxy server, the proxy server, if it was provided by a
vendor, would not have been able to "know" about
security holes in their locally-developed CGI scripts.
It keeps coming back to having carefully configured
backend servers -- which is expensive and requires
constant maintenance.

I'm not saying that "firewalls are dead" because this
problem has always been there and firewalls DO serve
a purpose. The fact is that sites with firewalls get broken
into less often than sites without. But - are a majority
of firewall users falsely confident?



Dana Nowell                           Voice (603) 595-7480 EXT 228
Cornerstone Software Inc.             FAX   (603) 882-7313
Work: mailto:DanaNowell () corsof com    Home: mailto:dana () nowell mv com
MIME attachments preferred, BINHEX and uuencoded acceptable.
The opinions above are free, remember you get what you pay for.  
As usual, I speak only for myself.
  


