Firewall Wizards mailing list archives

Re: Vulnerability Response (was: BGP TCP RST Attacks)


From: Devdas Bhagat <devdas () dvb homelinux org>
Date: Fri, 28 May 2004 01:04:52 +0530

On 27/05/04 13:07 -0400, Marcus J. Ranum wrote:
> Dave Piscitello wrote:
> > But don't you think you can manage risk better if you mitigate by
> > central policy definition and patch management?

> I don't think patch management is the solution for any significant aspect
> of the problem. I know that flies in the face of the "common wisdom" of
> security these days, but I think eventually time will tell and we'll give
> up on patch management as a security technique.
Patch management is a crucial part of the defense in depth concept. "Make
your network like a rock, not a nut" is how I might even phrase it.

Relying only on patch management as a security technique is a bad idea.
(It works for a single box providing no services to the Internet, or only
a very restricted set of services. For anything more complicated than
that, I will not rely purely on any one single solution for security.)

> The only place patches make a difference is on services that are
> internet-facing or mission critical.
That is where they make an immediate difference. However, your browser
is facing the Internet all the time. The days when the browser was a
simple application that would parse a specific subset of HTML tags are
long gone. Today we deal with Javascript, ActiveX and other functional
enhancements.

> What you'll find is, if you can define those systems and services, you'll
> have good security if the list is small and the machines are well
> configured.
That second point has a problem. A very big problem, which we are
discussing right now :).

> If the list is large (the "have cake, eat it too, not get fat" philosophy
> of Internet security) you'll have cruddy security no matter what you do.
Or a small list of poorly configured machines.

> Put differently, I see the "patch it everyplace" approach as an
> over-extension of an approach that *did* work OK: policy-centric host
> hardening.
It still does. You have to do it right. Every host involved needs to be
treated like a bastion host. If that idea is not the basis for your
patching, then you are doing it wrong.

> The idea was that we could harden certain crucial hosts and they'd be
> "safe enough" for Internet use. So people went and extended that
> philosophy to "harden everything" - i.e.: patch it everyplace. The
> problem is that hardening hosts only works when you are working with
> underlying software that CAN BE hardened and host operating systems
> that CAN BE secured.
This is a crucial point, and one that is usually missed. I am sure there
are plenty of people here who can lock down a Windows system.

> We were comfortable with building - and were able to build - very strong
> bastion hosts back in the early 90's. So people looked and said, "behold!
> if we just patch everything, then we won't NEED a proxy firewall! after
> all, we'll be as SECURE as a locked down proxy host!"    Ummm....
Wrong. Given a few more years, this reality will sink in.
They would be right if their applications were built the same way the
application-level proxies were.

The "right way" is still the right way and has been all along. It's:
        - minimize your access footprint (reduce zone of risk)
        - default deny
        - identify the few Internet services you need to the few servers
that need
                them, and lock those services down
                * keep those services patched if you can't lock them down with
                better means like chroot, setuid, and file permissions
Do both! chroot merely increases the time you have to respond to a
vulnerability. That you are running chrooted is no excuse to stay
unpatched.

>         - audit and look at your logs
Yet another point missed all too often.
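
To make that last list concrete, here is a minimal sketch of the
chroot/setuid lockdown: a single-service daemon that binds its one port
while privileged, then jails itself and drops root for good. The port
(2525), the /var/empty jail and the nobody uid/gid (65534) are
illustrative values picked for the example, not anything from the thread.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    struct sockaddr_in a;
    int s = socket(AF_INET, SOCK_STREAM, 0);

    memset(&a, 0, sizeof(a));
    a.sin_family = AF_INET;
    a.sin_port = htons(2525);               /* the one service we offer */
    a.sin_addr.s_addr = htonl(INADDR_ANY);

    /* bind and listen while we still hold whatever privilege we need */
    if (s < 0 || bind(s, (struct sockaddr *)&a, sizeof(a)) < 0 ||
        listen(s, 16) < 0) {
        perror("listen");
        return 1;
    }

    /* jail first, then drop root irrevocably: gid before uid */
    if (chroot("/var/empty") < 0 || chdir("/") < 0) {
        perror("chroot");
        return 1;
    }
    if (setgid(65534) < 0 || setuid(65534) < 0) {
        perror("setuid");
        return 1;
    }

    /* the accept() loop would run here: no root, no filesystem to abuse */
    return 0;
}

Ordering matters: bind before chroot (the jail should stay empty),
chdir("/") immediately after chroot(), and setgid() before setuid(),
because once the uid is dropped the process can no longer change groups.
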
<snip>
> administration.  Organizations with high clue in their
> IT department will centralize administration because they
> know users can't be trusted.
A good administration will also talk to its IT department and let it
know the business needs of the organisation. Usually the requirement is
not to implement a product^Wsolution but to accomplish a certain task.
Once those needs are defined, the IT department can come up with a list
of possible solutions, with implementation requirements, support
requirements and security risks. Given that data, management can
determine which specific solution to implement.

Often enough, it is not that users can't be trusted, but that they lack
sufficient information to make a good judgement on the implications of
their actions. (This is different from the case where the users can't be
trusted, and checks and balances need to be built into the system to
catch such users.)

(Funnily, all the people with clue that I personally know will always
build an Internet-facing server by stripping it down to bare essentials
and then adding the necessary services. The people without clue tend to
install everything and then stop services, but not uninstall them. Quite
a few of them follow that same policy for their own desktops as well.)
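
On the "bare essentials" point: auditing what a box actually exposes is
cheap enough to automate. Below is a minimal sketch, assuming Linux's
/proc/net/tcp (IPv4 only) and a purely hypothetical allowlist of ports
22 and 25; it flags any listening TCP port that is not on the short list
of services you decided you need.

#include <stdio.h>

static const unsigned allowed[] = { 22, 25 };  /* hypothetical: ssh, smtp */

static int is_allowed(unsigned port)
{
    size_t i;
    for (i = 0; i < sizeof(allowed) / sizeof(allowed[0]); i++)
        if (allowed[i] == port)
            return 1;
    return 0;
}

int main(void)
{
    char line[512];
    FILE *f = fopen("/proc/net/tcp", "r");

    if (!f) {
        perror("/proc/net/tcp");
        return 1;
    }
    fgets(line, sizeof(line), f);               /* skip the header row */
    while (fgets(line, sizeof(line), f)) {
        unsigned port, state;
        /* fields: sl local_address rem_address st ... (hex throughout) */
        if (sscanf(line, "%*d: %*8x:%4x %*8x:%*4x %2x", &port, &state) == 2
            && state == 0x0a)                   /* 0x0a is LISTEN */
            printf("%5u  %s\n", port,
                   is_allowed(port) ? "expected" : "NOT on the allowlist");
    }
    fclose(f);
    return 0;
}

Anything it flags is either a service to uninstall or an entry to add to
the allowlist - consciously.
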

Devdas Bhagat
_______________________________________________
firewall-wizards mailing list
firewall-wizards () honor icsalabs com
http://honor.icsalabs.com/mailman/listinfo/firewall-wizards

