Firewall Wizards mailing list archives

Re: The value of detecting neutralized threats. (was RE: IDS bla


From: "Stephen P. Berry" <spb () meshuga incyte com>
Date: Mon, 01 Feb 1999 10:59:17 -0800

-----BEGIN PGP SIGNED MESSAGE-----


> Having listened to the debate on the value of external or DMZ based IDS, I
> was struck by the fact that no one (to my knowledge) has pointed out the
> traffic that the external IDS will not catch, but that the internal one
> will.  Namely, attacks that originate from the internal, trusted network.
> We all know that a large amount of unauthorized access comes from the
> inside, so shouldn't this play a role?
> If 50% (or whatever) of the attacks come from the inside, that makes the
> external IDS useless in detecting half of the attempts.  Surely this must
> play a role in deciding how much time/money is spent on external IDS, right?

I think the `inside'/`outside' dichotomy is a frequently valuable but
nevertheless misleading fiction.

Whenever there is an interface between network segments with different
associated risk levels, intrinsic values, or whatever, there should be:

   -A security policy which reflects the differences between the segments
   -A mechanism for enforcing this policy
   -A mechanism for monitoring the enforcement of the policy

To do otherwise is to rely on voodoo and wishful thinking.
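
If it helps to see that checklist written down, here's a trivial sketch in
Python.  The segment names, policies, and mechanisms are all invented for
illustration; the only point is that each boundary needs all three entries:

    # Sketch: the three requirements as a per-boundary checklist.
    # Every name and mechanism below is a made-up assumption.
    BOUNDARIES = [
        {
            "between": ("engineering LAN", "accounting LAN"),
            "policy": "no interactive logins across the boundary",
            "enforcement": "router ACLs blocking telnet/rlogin",
            "monitoring": "periodic review of ACL deny counters",
        },
        {
            "between": ("DMZ", "internal network"),
            "policy": "only mail relay and proxied HTTP inbound",
            "enforcement": "firewall ruleset",
            "monitoring": "ID sensor plus firewall log analysis",
        },
    ]

    # A boundary missing any of the three is the voodoo-and-wishful-thinking
    # case: a risk difference with nothing stating, enforcing, or verifying
    # how it is handled.
    for b in BOUNDARIES:
        missing = [k for k in ("policy", "enforcement", "monitoring") if not b.get(k)]
        status = "OK" if not missing else "missing: " + ", ".join(missing)
        print(f"{b['between'][0]} <-> {b['between'][1]}: {status}")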

For many internal points of transition, the apparatus needed to
adequately enforce and monitor policy can be quite minimal---i.e., a
firewall and a couple ID sensors would be gross overkill.  It is
even possible that, for some folks, wishful thinking may be sufficient.
This is almost certainly believed true more often than it is true.
Despite the impression one might get from listening to network security
naysayers in general and IDS opponents in particular, overinvestment
in security is almost certainly not one of the major problems facing
the internet or the organisations that use it.


> Can someone comment on the relative difficulty of detecting internal
> attacks?  I would imagine in some ways it must be more difficult (more
> subtle break-ins), yet easier in other ways (tracking down the individual).

Two of the main difficulties of internal IDS operation (at least in my
experience) are the bandwidth involved, and determining what the
interesting anomalies[1] are.  In brief:

        -Bandwidth.  For most folks, there's going to be significantly
         more bandwidth to points on the near side of the firewall (or border
         router) than there is to the far side.  Usually by at least
         an order of magnitude.  You can probably do the math yourself
         (there's a rough back-of-the-envelope sketch just after this list).
        -Anomalies.  Is it a misconfigured {mail client/news reader/web
         browser}, or is it a probe?  Is that a DoS attack, or is somebody's
         damn laptop just trying to find a printer?  And so on.
         You'll (hopefully) be faced with more of these sorts of questions
         when looking at the traffic between internal hosts than
         when looking at the packets (hopefully) bouncing off your firewall[2].
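
To make `do the math yourself' a bit more concrete, here's a back-of-the-
envelope sketch in Python.  The figures (a T1 uplink, a single switched
100Mb/s internal segment, a ~500-byte average packet) are assumptions for
illustration, not measurements from any real network:

    # Rough estimate of how much more traffic an internal sensor has to chew on.
    UPLINK_BPS = 1.5e6        # assumed: a T1 to the outside world
    INTERNAL_BPS = 100e6      # assumed: one switched 100Mb/s internal segment
    AVG_PACKET_BITS = 4000    # assumed: ~500-byte average packet

    for label, bps in (("external", UPLINK_BPS), ("internal", INTERNAL_BPS)):
        print(f"{label} sensor, worst case: ~{bps / AVG_PACKET_BITS:.0f} packets/sec")

    print(f"ratio: roughly {INTERNAL_BPS / UPLINK_BPS:.0f}x more for the internal sensor")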

Of course distinguishing between an `interesting' anomaly and
an `uninteresting' one is second in difficulty only to determining what
is an anomaly in the first place.  And this is almost always going to be
true regardless of where your sensor is[3].

One way to make it easier is an active campaign of counterintelligence.
A twink who might be very difficult to perceive as a threat while just
sniffing around or turning doorknobs is likely to be very easy to peg
once it attempts an overt act...particularly if it is operating on
faulty information.

Here's an example I often use[4]:

A couple years ago I noticed that one machine I was administering was
getting hit by -huge- numbers of attempts to exploit a then-popular
known vulnerability.  The circulated exploit scripts attempted to
nab the passwd file of the attacked machine.  So I decided to
start responding to every attempt to grab the passwd file with a
bogus passwd file...with a crackable root passwd.  Sure enough,
I'd start seeing attempts to exploit the vulnerability followed a couple
hours later by attempts to telnet into the machine in question.  So
far, what I'm doing at my end is pretty routine.  It does supply
some useful information, in that only a comparatively small fraction
of the folks who get the bogus passwd file subsequently attempt
to log in...so it's a filter for weeding out the kids who are
just cutting and pasting an exploit to see what happens.
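
The bogus-passwd trick itself is easy to reproduce.  A minimal sketch in
Python follows; the weak password, file name, and entries are made up for
illustration, and the crypt module is Unix-only (and gone from the standard
library as of Python 3.13):

    # Build a decoy passwd file whose root entry cracks quickly.
    import crypt  # Unix-only; removed from the standard library in Python 3.13

    WEAK_ROOT_PASSWORD = "letmein1"   # assumption: something a cracker finds fast
    DECOY_PATH = "decoy_passwd"       # assumption: served in place of the real file

    def build_decoy():
        # Traditional DES crypt with a fixed two-character salt, so the hash
        # looks like an old-style passwd entry and falls to a dictionary run.
        root_hash = crypt.crypt(WEAK_ROOT_PASSWORD, "ab")
        lines = [
            f"root:{root_hash}:0:0:root:/root:/bin/sh",
            "daemon:*:1:1:daemon:/usr/sbin:/bin/sh",
            "bin:*:2:2:bin:/bin:/bin/sh",
        ]
        return "\n".join(lines) + "\n"

    if __name__ == "__main__":
        with open(DECOY_PATH, "w") as fh:
            fh.write(build_decoy())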

What I discovered was that if, instead of just dropping the subsequent
attempt to connect via telnet (or whatever), I responded with a
tcpd(8)-ish `Connection from [foo.com] not allowed', where foo.com
was the domain (but not the full host name) of the connecting host,
I would frequently see, in the next couple minutes, connect attempts
from bar.com, baz.net, and typically a half dozen or more others.
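
A decoy listener along those lines is also only a few lines of code.  The
sketch below is again Python; the port, log path, and exact wording of the
refusal are assumptions for illustration, not a description of any
particular tool:

    # Decoy telnet listener: refuse by domain only, and log every attempt.
    import socket
    import time

    LISTEN_PORT = 2323              # assumption; the real thing would sit on 23
    LOG_PATH = "decoy_telnet.log"

    def domain_of(addr):
        """Return the domain (not the full host name) of the peer, if resolvable."""
        try:
            fqdn = socket.gethostbyaddr(addr)[0]
            parts = fqdn.split(".")
            return ".".join(parts[-2:]) if len(parts) >= 2 else fqdn
        except OSError:
            return addr

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", LISTEN_PORT))
    srv.listen(5)
    while True:
        conn, (addr, _port) = srv.accept()
        dom = domain_of(addr)
        with open(LOG_PATH, "a") as log:
            log.write(f"{int(time.time())} {addr} {dom}\n")
        # Name only the domain in the refusal, then watch whether retries start
        # arriving from hosts in *other* domains.
        conn.sendall(f"Connection from [{dom}] not allowed\r\n".encode())
        conn.close()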

The moral of the story?  The bad guys will offer lots of information
about themselves if you give 'em half a chance.  The above example
didn't involve anything resembling what is traditionally meant by
IDSes, but I'm sure you can see where the same principles can be
applied.

Corollary moral:  Information you learn from analysing failed exploit attempts
can tell you more than just that an attempt was made and failed.  Traffic
that looks innocuous when you know nothing about the source might look
considerably less innocuous if you discover the same source has been
bouncing other traffic off your firewall.
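
Mechanically, that's just a join on source address between two logs.  A toy
sketch, assuming two hypothetical log files with one event per line and the
source address in a known whitespace-delimited column:

    # Cross-reference sources seen at the decoy with sources the firewall dropped.
    def sources(path, field):
        """Collect the given whitespace-delimited field from every line of a log."""
        out = set()
        with open(path) as fh:
            for line in fh:
                parts = line.split()
                if len(parts) > field:
                    out.add(parts[field])
        return out

    decoy_hits = sources("decoy_telnet.log", 1)        # format from the listener above
    firewall_drops = sources("firewall_drops.log", 0)  # hypothetical drop log
    for addr in sorted(decoy_hits & firewall_drops):
        print(f"{addr}: probed the decoy *and* bounced other traffic off the firewall")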








- -Steve

- -----
1     Brief sermon:  When I see the phrase `intrusion detection system' used,
      it generally appears that what the speaker actually means to say 
      is `system for anomaly detection using network traffic analysis' 
      or something similar.  Most contemporary IDSes (at least
      those with which I have some experience) can do a hell of a lot more
      than detect intrusions, and in most cases the actual detection
      of intrusions can be better done by other methods.
2     If your internal network is well segmented, this can probably be
      minimised.  Happily, the machines that tend to generate the most
      false positives also tend to be the machines that an administrator
      cares least about (i.e., desktops rather than production servers),
      so isolating machines of different functions, by enforced policy,
      is generally not infeasible.  Your Mileage, of course, May Vary.
3     The main exception I can think of is the case of an isolated sensor
      placed so it should typically receive -no- traffic (and variations
      on this theme).
4     Because I'm no longer collecting particularly useful information
      via this method.  While security through obscurity isn't security,
      advertising your counterintelligence methods on mailing lists isn't
      a particularly good idea, either.


-----BEGIN PGP SIGNATURE-----
Version: 2.6.2

iQCVAwUBNrX4Ryrw2ePTkM9BAQEKkgP/flzhHNz/VGu8CGm4ddICETvKo+3IkCv/
GwVh5nQ2nWEcT3RyM/Cc0yc88VkzDxVnl+Mi8xJ7E7l0VNy+V1Non1LKIwW58SGZ
QICMfgzUVT49j8KwVsCAG4J9BIqqzBUKRRylKBPozl143zku+dZz+ND94xehiCNU
44twnFCGIeQ=
=v7sA
-----END PGP SIGNATURE-----


