Educause Security Discussion mailing list archives

Re: Rethinking the DMZ


From: Mike Caudill <mike.caudill () DUKE EDU>
Date: Thu, 6 Sep 2012 19:04:49 +0000

Using 1918 addresses internally with a default-deny policy eliminates the knocking potential to everything except your 
static-NAT'ed servers.  That's a whale of a risk exposure reduction with a simple perimeter firewall :)

Only so long as you have your garlic model.  Much of today's malware distribution is a come-and-fetch-it model.  Even 
though attacks still pound constantly on HTTP, SSH, RDP and other ports, many forms of malware now get onto hosts by 
way of drive-by downloads rather than a malicious machine in a foreign country trying to break in.  Your suggested 
approach of compartmentalizing internal hosts is more important than just using RFC 1918 addresses and believing that 
private addressing alone buys you a large increase in security.  All RFC 1918 addressing does is force the attacker 
down a different exploitation avenue, or add one extra step.
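
To make the point concrete, here is a toy Python sketch of a stateful default-deny perimeter (hypothetical class and 
host names, not any vendor's API).  It blocks every unsolicited knock, yet happily passes a payload the victim fetched 
itself:

class StatefulPerimeter:
    def __init__(self, static_nat_servers):
        self.static_nat_servers = set(static_nat_servers)  # publicly mapped hosts
        self.established = set()                           # (inside, outside) flows

    def outbound(self, inside_host, outside_host):
        # Inside hosts may initiate connections; the flow state is recorded.
        self.established.add((inside_host, outside_host))
        return True

    def inbound(self, outside_host, inside_host):
        # Unsolicited inbound traffic reaches only static-NAT'ed servers;
        # everything else must be the return half of an established flow.
        if inside_host in self.static_nat_servers:
            return True
        return (inside_host, outside_host) in self.established

fw = StatefulPerimeter(static_nat_servers={"www"})
print(fw.inbound("attacker", "desktop-42"))     # False: door-knocking blocked
fw.outbound("desktop-42", "malware-cdn")        # user clicks a bad link...
print(fw.inbound("malware-cdn", "desktop-42"))  # True: payload rides the return path

The firewall never misbehaves; the user simply invites the attacker in.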

-Mike-


Mike Caudill
Assistant Director, Cyber Defense and Response
Duke Medicine
Email:  mike.caudill () duke edu
Phone: +1-919-668-2144 / +1-919-522-4931 (cell)


From: Jeff Kell <jeff-kell () utc edu>
Date: Thursday, September 6, 2012 2:51 PM
To: The EDUCAUSE Security Constituent Group Listserv <SECURITY () LISTSERV EDUCAUSE EDU>
Cc: Mike Caudill <mike.caudill () dm duke edu>
Subject: Re: [SECURITY] Rethinking the DMZ

On 9/6/2012 2:24 PM, Mike Caudill wrote:
Hi Ena,

The problem with the concentric circles approach is that once you get past the firewall, without other layered security 
protections, one "trusted" host can easily attack and infect another "trusted" host.  And if you look at the statistics 
on what AV software actually catches, it does not come close to being 100% effective.  A perimeter firewall can perform 
some useful functions, but it can also introduce problems of its own.

That's the classic "onion" model.  I prefer the "garlic" model...  separate layered cloves of application areas, each 
with its own core, wrapped around a common infrastructure.  We do this internally with VRFs, which confines the 
collateral damage of a single compromised host to its container.
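
As a rough Python sketch of the clove policy (hypothetical clove names, nothing like our actual VRF configuration), the 
rule amounts to: talk within your own clove or to the shared core, nothing else:

CORE = "shared-core"   # common infrastructure: DNS, NTP, logging, etc.

def reachable(src_clove, dst_clove):
    # Intra-clove traffic and traffic to the shared core are allowed;
    # clove-to-clove traffic is denied by default.
    return src_clove == dst_clove or dst_clove == CORE

assert reachable("hr-apps", "hr-apps")           # within a clove: fine
assert reachable("hr-apps", "shared-core")       # core services: fine
assert not reachable("hr-apps", "student-apps")  # lateral movement: denied

A popped host in one clove can still attack its own clove-mates, but it cannot roam the whole enterprise.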

All my instincts tell me that enterprise borders are less helpful, and that I want our focus to be on placing 
well-designed protection very close to the resources (data, app servers) we want to protect, treating everything else 
as public and untrusted, even if a device happens, for the moment, to hold an IP address that "belongs" to the 
University.

http://www.internetworldstats.com/stats.htm says the Dec 31 2011 internet user population was 2,267,233,742.  Should 
they all have access to your front door?  Can they all try the lock?


I'm a fan of open networks, closed servers, protected sessions.

Using 1918 addresses internally with a default-deny policy eliminates the knocking potential to everything except your 
static-NAT'ed servers.  That's a whale of a risk exposure reduction with a simple perimeter firewall :)
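
Back-of-the-envelope, with completely made-up numbers (substitute your own counts):

total_hosts = 20000        # hypothetical campus host count
public_servers = 40        # hypothetical static-NAT'ed servers

before = total_hosts       # externally knockable hosts with public addressing
after = public_servers     # externally knockable hosts with 1918 + default-deny

reduction = 100.0 * (1 - float(after) / before)
print("Reachable from the internet: %d -> %d (%.1f%% reduction)" % (before, after, reduction))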

Now port-restrict the openings at the perimeter or, as discussed earlier, run the public-facing side through an 
F5/load-balancer/firewall to reach a further-closed back-end for even more reduction.
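
Sketched as a toy two-tier policy in Python (hypothetical hostnames and a single HTTPS opening, assumptions for 
illustration, not an actual rule set):

PERIMETER_ALLOW = {("vip.example.edu", 443)}    # the only opening at the edge
LB_POOL = {"app-1.internal", "app-2.internal"}  # closed back-end pool

def perimeter_permits(dst_host, dst_port):
    # The perimeter admits only HTTPS to the load balancer's VIP.
    return (dst_host, dst_port) in PERIMETER_ALLOW

def lb_permits(src_host, dst_host):
    # Only the load balancer itself may reach the back-end pool.
    return src_host == "vip.example.edu" and dst_host in LB_POOL

assert perimeter_permits("vip.example.edu", 443)        # public front door
assert not perimeter_permits("app-1.internal", 443)     # back-end invisible outside
assert lb_permits("vip.example.edu", "app-1.internal")  # only the LB reaches the pool

The back-end never needs a publicly reachable address at all, which is the "further closed" part.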

Or join the other lemmings and throw it all up in the cloud and let someone else worry about it :)  Just be sure to 
duck the auditors :)

Jeff
