Firewall Wizards mailing list archives

Re: Defense in Depth to the Desktop


From: Chris Pugrud <chris () pugrud net>
Date: Mon, 13 Dec 2004 16:41:56 -0800 (PST)


--- "Paul D. Robertson" <paul () compuwar net> wrote:
[lots of stuff snipped because we seem to agree more than disagree]

I'm dealing with one of those right now too: severely over-done hardware, an
over-complex network, and multi-homed everything (Windows boxes too, and WINS
doesn't like that!).

I once worked with an organization that triple-homed all production servers.
The servers were segmented for security into separate subnets: customers on
one side, administration on the other.  They then had to tackle backup and
decided to triple-home the machines onto a _flat_ network to "make backups
faster".  People, usually in a hurry, reach for the easiest solution: something
they know, and something they think will solve the problem at hand.

Most organizational networks grow organically, in response to the rush project
of the week, and they usually do not have a steady hand guiding them that is
more concerned about long-term architecture and growth issues, so they wind up
with obtuse systems that could be better diagrammed by vines of ivy.  Every
once in a while it becomes our responsibility to go in there, clean them up, and
set them on a sane path for long-term growth.  We almost never get to scorch
the earth; that would be too easy, and we wouldn't be justified.  We have to
unravel their systems and reorganize them as best we can.

Hype/Budget/Funding scale).  I was just searching for something that is a bit
easier to explain and apply to the people that write the checks, and is more
effective than sacrificing rubber chickens at dawn (err, I mean the nifty
buzzword-compliant gimmick of the week).

Sure, but I guess I'm really not seeing this as a solution for a firewall
so much as a solution for an internal network architecture...

That would be my fault.  In the rush to explain this as succinctly as possible,
I initially made a few too many assumptions and left out some critical details.
 What I am proposing is an internal network architecture that complements the
perimeter firewalls and the application, host, and O/S security.  I'm trying to
add another tool to our toolbelt for "defense in depth"; I'm not trying to
replace anything or claim that it's the greatest thing since sliced butter.  I
am saying that it implements stronger internal security controls using mostly
existing equipment and technology, and that it is very effective at stopping a
limited range of attacks when our other defenses fail.

But configuration things, especially those which fit in a .reg file that
can go into a login profile are much more likely to be adopted than "buy
another firewall" things, no?

Do both.  That's my overeducated understanding of defense in depth: apply
several tools at successive layers, using varying technologies, techniques, and
vendors, so that when more than one of them fails, at least one of them will
still stop the attack.
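
And agreed that the .reg route is cheap to deploy.  As a sketch, assuming an
XP SP2 shop and a login script that runs with rights to write HKLM (or the
same keys pushed by machine policy), forcing the Windows Firewall on in both
profiles looks roughly like this:

    Windows Registry Editor Version 5.00

    ; Force the XP SP2 firewall on for both profiles.  Values are
    ; illustrative; a real deployment would also manage the exception lists.
    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\SharedAccess\Parameters\FirewallPolicy\DomainProfile]
    "EnableFirewall"=dword:00000001

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\SharedAccess\Parameters\FirewallPolicy\StandardProfile]
    "EnableFirewall"=dword:00000001

Imported silently with "regedit /s" from the login script, that's one line of
overhead per workstation.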

Again, if the data's on the server, and the data's at risk, I'd rather not
see it happen.

The data on the server is at risk regardless.  What I'm focusing on is a
technique which would help prevent a worm from shutting down an entire
organization.  When the admins are less concerned about the sanctity of every
stinking desktop, they have more time and money to focus on properly protecting
the servers to prevent such an attack.

How many hundred times a day is a user going to "click" to access the
organizational file server, email server, and porn (err, proxy) server before
they just enable some dangerously broad default allows?  I recognize some value
for personal firewalls, but I think that using personal firewalls, especially
on deskbound organizational systems, puts us on the wrong side of the
tail-chasing treadmill.  You are talking about a lot of money and management
and application management and helpdesk headaches that could be much more
easily and cheaply and sanely managed at the core router/firewall/rubber
chicken substitute.

Hmm, the user can't enable it in the "enterprise" versions; it's a company
or group-wide policy thing.  I'm not sure it's a lot of support costs, and
given the amount of spyware and Trojans I've found recently, I'd say it's
almost a necessity.  Firewalls don't stop this stuff without content
inspection and knowing what's bad, or without disabling active content, and
these days that's a difficult battle to win :(

PFWs seem to me to be a pretty good stop-gap.  The ability to get back
some control over the desktop is worth its weight in gold; losing that
ground is what made the war swing against us!

Is this really an improvement?  This is where I can't help but play devil's
advocate.  Are we really better off when our security is dependent on hundreds
or thousands of desktops (the weakest link) that we fight desperately to
control in a never-ending, futile battle?  One of the first tenets of systems
security is physical security, and you can never claim that you have physical
control over a machine at your user's fingertips.

What's wrong with a model that acknowledges that while we will do our best to
protect the security of user machines, they are a resource we cannot
ultimately control, so rather than making the security of the entire
organization dependent on them, we are going to reduce our effective security
perimeter to a known subset of systems that we do maintain absolute physical
control over?  I'm not suggesting that we abandon user machines; I'm suggesting
that we remove them from being available to be the weakest link in the security
of the organization.  I'm suggesting that we acknowledge that desktops are
going to get hacked and infected (especially laptops) and make a concerted
effort to protect the rest of the organization from that inevitable compromise.
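
To make that concrete: the kind of control I have in mind lives at the core
router, not on the desktops.  A sketch in Cisco-style syntax, with made-up
addresses (desktops on 10.1.0.0/16, servers on 10.2.0.0/24), and assuming
desktop-to-desktop traffic is actually forced through the router (separate
subnets per segment, or the host-route trick discussed below):

    ! Desktops may reach the file and mail servers on the ports they need...
    access-list 110 permit tcp 10.1.0.0 0.0.255.255 host 10.2.0.10 eq 445
    access-list 110 permit tcp 10.1.0.0 0.0.255.255 host 10.2.0.20 eq 25
    ! ...but one desktop may never touch another, the path a worm wants.
    access-list 110 deny   ip 10.1.0.0 0.0.255.255 10.1.0.0 0.0.255.255
    access-list 110 permit ip 10.1.0.0 0.0.255.255 any
    !
    interface Vlan10
     ip access-group 110 in

One choke point, maintained by the people who already maintain the router,
instead of a per-desktop policy we have to keep winning.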

Sure, that's a risk that's easier to accept: to know that one system will be
vulnerable, and that you can focus your energy on getting that one system
updated and protected.  That's a bit more difficult to do if your attention is
"focused" on a few thousand desktops in various states of patch/AV/user
disfigurement.

You're still going to have to deal with the desktops, because the users
are going to have to work and have critical files there.  I think that I'm
probably more worried about spyware Trojans than worms right now- worm
events get lots of press, but the infestations are really ugly.

I'm not abandoning the desktops; I'm trying to minimize the potential of one
infected desktop infecting all of the desktops.  One machine is easier to clean
than hundreds, or thousands.  I'm also addressing the critical-files issue.  If
I were an insider trying to steal juicy data, I would attack the desktops
and laptops of the people that have that data directly.  It would be a lot
easier and more discreet than attacking the fortified, guarded, and watched
servers.

But then you've got a single point of failure, and just using a
255.255.255.255 subnet mask and a static route seems not that messy to me.
Plus it works no matter what vendor's gear you happen to hit; that's always a
bonus to me, because the "switch just went down and we need to put in whatever
we can" scenario, handled on little sleep, needs to not carry a bunch of
administrative overhead.

I'm not discounting this approach; I just need to noodle on it some more to
understand all of the implications.  Do you have any references to this being
applied and used?
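
For what it's worth, my current reading of the host side of that trick,
written as Linux ip(8) commands with hypothetical addresses (host 10.1.2.50,
router 10.1.2.1), is:

    # /32 mask: the host believes nothing else is on-link, so even the
    # machine on the next port of the same switch must be reached via
    # the router, where the access controls live.
    ip addr add 10.1.2.50/32 dev eth0
    # Host route so the router itself is still reachable directly...
    ip route add 10.1.2.1 dev eth0
    # ...and everything else goes through it.
    ip route add default via 10.1.2.1

If that is the mechanism you mean, I can see why it survives a vendor swap:
it asks nothing of the switch at all.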

Thank you,

chris

_______________________________________________
firewall-wizards mailing list
firewall-wizards () honor icsalabs com
http://honor.icsalabs.com/mailman/listinfo/firewall-wizards

