Educause Security Discussion mailing list archives

Re: Philosophy of DMZ - Summary and direction change: Reverse proxy?


From: Alan Amesbury <amesbury () OITSEC UMN EDU>
Date: Wed, 20 Apr 2005 13:23:31 -0500

Barros, Jacob wrote:

Thanks for all the responses. If I could summarize the comments so far,
it sounds like everyone is saying to find a way to keep the DMZ and
secure the inside. I apologize for not describing our network's config
in detail... just trying to keep the posts succinct.


I tend to look at it as less of a "DMZ" (which I've found typically
indicates a border between "good guys" and "bad guys") and more as a
form of zoning (systems on each side of a zone boundary have largely
unfettered access to each other, while access across a boundary is more
tightly controlled).  I agree with the implied statement from Tom
Davis:  There are many definitions for "DMZ."  However, the term
"border" seems (to me, anyway) less ambiguous, so that is the term I'll
try to stick with here.  A DMZ approach also lends itself to the
hard-crunchy-shell-with-soft-chewy-center security model, which is
probably not the best approach for an organization as internally diverse
as many .edu's are.  Use of internal borders makes a lot of sense in spots.

Our DMZ is the latter of the two models mentioned by Tom. It is behind
our firewall. However as Michael mentioned, there is the task of
opening ports for internal users or services to access DMZ resources. A
long-term concern we have with this model is that the more servers we
put in the DMZ, the greater the load we put on our pix. For example,
the solution that prompted this thread will be primarily used on campus.
If I could guess a figure, probably 80% of its usage will be from
internal users. That figure is common for all of our servers (primary
website and email gateway excluded) as we are currently geared more
toward on-campus students.


Although I've not much direct experience with the PIX or the FWSM (its
blade-based sibling), I've some experience with packet-filtering
firewalls (stateful and non-stateful), mostly in the form of things like
PF, IPFilter, ipfw, iptables, and ipchains, but some in the form of
Raptor (in its non-proxying mode) and CheckPoint.  When it comes to
passing traffic, packet filtering firewalls tend to do it pretty easily
and efficiently.  Newer ones typically have a pretty highly optimized
decision tree, too, so they're not as easy to overload as older
filtering technologies (like older routers that had rudimentary,
non-stateful packet filtering capabilities sort of bolted on as an
afterthought).  So, as long as you're not filtering a really large pipe
(i.e., you're running the PIX at 1Gb or slower and the pipe isn't
saturated), I suspect the PIX *may* do OK.  (I'd still suggest checking
its diagnostic and logging output, though, as it can't hurt to be sure.)
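
To make the contrast with proxying concrete later on, here's a rough
sketch (Python, with a made-up zone-based rule table) of the kind of
decision a packet filter makes: a quick first-match walk over header
fields, with no payload handling at all.

    # Sketch of a stateless, header-only filter decision.  The rule
    # table and zone names are hypothetical; a real firewall compiles
    # something like this into a heavily optimized decision tree.
    RULES = [
        # (src zone, dst zone, proto, dst port, action) -- first match wins
        ("any",    "dmz",  "tcp", 443,  "pass"),
        ("campus", "dmz",  "tcp", 22,   "pass"),
        ("any",    "any",  "any", None, "block"),   # default deny
    ]

    def decide(src_zone, dst_zone, proto, dport):
        """Return 'pass' or 'block' by looking only at header fields."""
        for r_src, r_dst, r_proto, r_port, action in RULES:
            if r_src not in ("any", src_zone):
                continue
            if r_dst not in ("any", dst_zone):
                continue
            if r_proto not in ("any", proto):
                continue
            if r_port is not None and r_port != dport:
                continue
            return action
        return "block"                               # nothing matched

    print(decide("internet", "dmz", "tcp", 443))     # pass
    print(decide("internet", "campus", "tcp", 22))   # block

Note there's no connection state and no payload in sight; that's why
this sort of decision stays cheap even under load (and why keeping the
ruleset short and well-ordered, as below, keeps it that way).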

Speed can also be largely maintained if your rulesets are kept
reasonably simple.  This may take considerable planning at the outset,
but such planning can ultimately pay for itself down the line.  For
example, one side effect of having a well-planned security model is that
it can make maintenance (like modifications to accommodate new types of
traffic) cheaper and easier.  Besides, the auditors like documentation.  :-)

No one alluded to the concept of proxying info to external users. Is
anyone doing it? My assumption was that the fewer the 'holes' in the
firewall, the better performance and less risk. In my mind it makes the
most sense to have a few proxy servers in the DMZ answering all external
requests for internal resources, but no one seems to be doing it. Is my
assumption wrong? Am I barking up the wrong tree?


Proxying is a different animal.  I've got a fair amount of experience
with proxying firewalls as well (lots of older Raptor, but a fair amount
of other stuff as well), and have observed that proxying can *really*
slow things down.  Note that I'm talking about proxying at layer four
(TCP, UDP, etc.) or higher, sometimes up at the application level
(layers 5-7 of the OSI model).  In simplest terms, it is much, much
easier to look
at a packet's header information (specifically the IP, TCP, UDP, etc.,
header information) and make a pass/block decision than it is to proxy.
There are a couple of reasons for this.

First, a full proxy firewall will function as an endpoint for the
original connection, then establish a second, independent connection to
the intended destination.  This means that either a) the client
initiating a connection is targeting an IP on the firewall, or b) the
firewall is intercepting packets for the intended target.  In either
case, a true proxy doesn't pass packets through in their original form;
it copies the data out of them, wraps that data in brand new packets,
and sends those on their way.  Since we're talking about proxying at
layer four or above, the proxy may also need to wait for data to arrive
so it can be reassembled.  For example, a large block of data (say, 60K)
written by the original application will be split across multiple TCP
segments while in flight; the proxy has to wait for those segments to
arrive and reassemble the byte stream before it can copy that data to
the other half of the proxied connection.  A proxying *firewall*
does all this, but adds in ruleset capabilities as well.  All this takes
time, although not necessarily a lot of CPU.
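
If it helps to see it, here's a bare-bones sketch of that
terminate-and-reconnect behavior in Python (the listening port and the
internal destination are invented for illustration; there's no ruleset,
logging, or error handling):

    # Minimal layer-four proxy: accept a client connection, open a
    # second, independent connection to the real server, and copy
    # bytes between the two.  Nothing passes through untouched.
    import socket
    import threading

    LISTEN_ADDR = ("0.0.0.0", 8080)               # clients connect here
    DEST_ADDR = ("www.internal.example.edu", 80)  # hypothetical real server

    def relay(src, dst):
        """Copy data one direction until the sender is done."""
        while True:
            data = src.recv(4096)    # data arrives in arbitrary-sized chunks
            if not data:
                break
            dst.sendall(data)        # re-sent in brand new packets
        dst.shutdown(socket.SHUT_WR)

    listener = socket.socket()
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(LISTEN_ADDR)
    listener.listen(5)

    while True:
        client, _ = listener.accept()                 # proxy is the endpoint
        server = socket.create_connection(DEST_ADDR)  # second connection
        threading.Thread(target=relay, args=(client, server)).start()
        threading.Thread(target=relay, args=(server, client)).start()

Even in this toy form you can see where the time goes: every byte is
received, buffered, and retransmitted, and a real proxying firewall is
also consulting its ruleset along the way.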

Second, a proxying firewall doing deep inspection of payloads has to
reassemble the original payload in order to inspect it.  If it's only
proxying at layer four (TCP, UDP, etc.), then it's already doing the
reassembly work as part of proxying.  However, if it's aware of
upper-layer protocols, then it will take more time (and resources) for
it to reconstruct the upper layer information, evaluate it, and act on it.
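
As a rough illustration of that extra work, here's what a crude
HTTP-aware check might look like once the proxy has the reassembled
byte stream in hand (the method list and size limit are arbitrary; real
protocol inspection is far more thorough):

    # Inspect the start of a reassembled stream and decide whether it
    # looks like an HTTP request we're willing to forward.
    ALLOWED_METHODS = {b"GET", b"HEAD", b"POST"}
    MAX_REQUEST_LINE = 2048      # reject absurdly long request lines

    def inspect_http_request(stream_bytes):
        """Return True only for a plausibly protocol-compliant request."""
        line, sep, _rest = stream_bytes.partition(b"\r\n")
        if not sep or len(line) > MAX_REQUEST_LINE:
            return False                      # incomplete or oversized
        parts = line.split(b" ")
        if len(parts) != 3:
            return False                      # not "METHOD URI VERSION"
        method, _uri, version = parts
        return (method in ALLOWED_METHODS
                and version in (b"HTTP/1.0", b"HTTP/1.1"))

    print(inspect_http_request(b"GET /index.html HTTP/1.1\r\nHost: x\r\n\r\n"))  # True
    print(inspect_http_request(b"A" * 5000 + b"\r\n"))                           # False

Every connection pays for this parsing (and usually a great deal more)
on top of the copying described above.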

As for your question of whether proxying is actually being used...  I
know of a large company that makes extensive use of proxying firewalls.
The fact that they make use of full, protocol-aware proxies has
demonstrably protected them from a number of vulnerabilities.
Attacks on the network stack itself (Land, Ping of Death, etc.) simply
can't get through a true proxy (but woe to the firewall that has a
vulnerable stack of its own!).  The same is largely true of buffer
overflow attacks against application-layer software (think Apache,
Sendmail, etc.),
although a proxy might still pass an attack that's protocol-compliant.
Unfortunately, while their security stance has been historically strong,
they also happen to be really, *REALLY* bad at monitoring their resource
usage ("Capacity planning?  What's that?"), and have been (unpleasantly)
surprised to find that their firewalls are sweatin' hard to keep up with
load.  Last I'd heard they were trying to jettison their proxying
firewalls in favor of faster (and arguably less secure) packet-filtering
technology like PIX.

Although I believe that many organizations can benefit from the use of a
full proxy (particularly one that's protocol-aware) in high-security
applications, I suspect organizations that place a high priority on
legitimate, high-bandwidth traffic getting passed quickly (like .edu's)
will probably make very limited use of full proxies, choosing instead to
augment stateful packet filtering firewalls with other defensive
technologies (e.g., IDS, IPS, etc.).  I'd expect proxy firewall
deployments to be pretty limited.

Sorry for the novella-length response, but I hope it's useful info!


--
Alan Amesbury
University of Minnesota

**********
Participation and subscription information for this EDUCAUSE Discussion Group discussion list can be found at 
http://www.educause.edu/groups/.
