IDS mailing list archives

RE: IDS vs. IPS deployment feedback


From: "Palmer, Paul (ISSAtlanta)" <PPalmer () iss net>
Date: Thu, 13 Apr 2006 20:51:58 -0400

Brian,

I cannot speak for other vendors, but I suspect that many of the vendors
share much of our experience on the topic of what to block and what not
to block by default. Here are a few of the criteria:

- ISS, like every vendor, has certain QA processes that it goes through
to vet its "signatures" (ISS does not like to use the term signature,
as people tend to incorrectly assume that our protection is based upon
simple strings or regular expressions). However, no amount of lab
testing and trial deployments can match the feedback you will get once
your signature is widely deployed. Corporate networks are a much
stranger place than you could ever possibly imagine. Therefore, we
prefer to recommend blocking for a signature only after it has been in
the field for a month or two. Although false positives are an issue,
ISS has also discovered over time that at some locations routine use
of a vulnerability has become institutionalized. We never want to
interfere with a customer's network in our default configuration, so
if there is any doubt we do not recommend blocking.

- ISS also has anomaly-based signatures. You can think of these as
having a signal-to-noise ratio. Indeed, many require customer tuning
to be truly effective (see the rough sketch after this list for what
that tuning amounts to). Therefore, these tend not to be candidates
for default blocking but, nevertheless, are quite suitable for
blocking once tuned.

- ISS is getting more and more into "behavioral signatures". These are
a slight departure from our vulnerability-based signatures, which
trigger on any traffic attempting to exploit a specific vulnerability.
The behavioral signatures instead match on consistent elements of
malware that we see repeated regardless of the vulnerability exploited.
These require a lot of tweaking to get just right (several months from
deployment to blocking). However, they can be amazingly effective
against 0-days. Over the last year, Proventia has blocked approximately
90% of all of the 0-day viruses crossing the network using these
"signatures".

- ISS also provides some policy enforcement signatures. That is, we
have signatures that can be used to block peer-to-peer or instant
messenger traffic, for example. Since customer policies vary widely,
these are not candidates for a default blocking policy.

- ISS provides a large number of audit signatures. These are very
handy when you need to collect a lot of forensics during an incident,
but blocking with these is generally a bad idea, as they trigger on
normal traffic by design.

- In some cases, signatures are disabled by default (and therefore do
not block) for performance reasons. ISS has only a small number of
these. However, the design of some IDS and IPS systems causes them to
degrade significantly as you enable more signatures. Those vendors
must also choose to "retire" some older signatures to make room for
newer ones.
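
On the anomaly-based signatures above, here is a rough sketch of what
"tuning" amounts to. The threshold, event format, and function below
are hypothetical illustrations of my own, not Proventia settings or
code:

    # Rough sketch only: a generic rate-based anomaly check, not ISS code.
    # 'threshold' is the knob a customer would tune before enabling blocking.
    from collections import defaultdict

    def anomalous_sources(events, threshold=100):
        """Flag any (source, minute) bucket exceeding 'threshold' events."""
        counts = defaultdict(int)
        for src_ip, minute in events:      # events are (source, minute) pairs
            counts[(src_ip, minute)] += 1
        return [key for key, count in counts.items() if count > threshold]

    # A site with chatty-but-legitimate hosts raises the threshold; only
    # after that kind of tuning does blocking on such a signature make sense.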

Let's skip talking about where Snort may be imperfect. I really am
uncomfortable publicly bashing other vendors and I already feel like I
have strayed a bit across that line in recent messages. I think this
list is at its best when it maintains a more positive tone. There is no
one "complete" product in this industry currently. For example, I know
of many customers that have both ISS and Sourcefire products.

You ask how many false negatives can get through a default IPS
configuration. This varies tremendously between products in the
industry. It is a relevant question for ISS products given their
pedigree (ISS started as an IDS company). For us, the number is
currently very small, although it did not start out that way. It is
now easily less than 10% (probably less than 1%). Since ISS made the
transition to IPS products, reducing that number has been a focus for
us. With each content update we add blocking to more signatures than
there are new signatures in the update.
So, our percentage blocked increases with each update. Between our
focus on false positive reduction and our focus on adding blocking to
the signatures where it will do the most good, the difference in
trigger rates between the signatures that block and those that do not
is easily several orders of magnitude. I know this to be true because
I receive a summary report every morning showing the trigger rates
for all signatures from a collection of special sensors placed
throughout the Internet.
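
To make that arithmetic concrete (the figures below are invented for
illustration and are not our actual signature counts), if each content
update enables blocking on more signatures than it introduces, the
blocked fraction can only climb:

    # Hypothetical numbers only, to show the trend described above.
    total, blocking = 3000, 1800       # starting signature counts (made up)
    for update in range(1, 6):
        total += 20                    # new signatures shipped this update
        blocking += 30                 # existing signatures newly set to block
        print(f"update {update}: {blocking / total:.1%} of signatures block")
    # The printed percentage rises with every update, which is the point.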

Even though I believe that ISS' numbers in this category are likely the
best in the industry, I would not be surprised if all of the IPS vendors
(that is, those companies whose primary source of income is from IPS
installations and not IDS installations) can also boast very good
numbers on this metric. The nature of the business drives them to
constantly reduce this number.

Paul

-----Original Message-----
From: Basgen, Brian [mailto:bbasgen () pima edu] 
Sent: Monday, April 10, 2006 6:10 PM
To: focus-ids () securityfocus com
Subject: RE: IDS vs. IPS deployment feedback


Paul,

 Thanks for your response. I'd love to hear you qualify the
differences a bit more.

 Every IPS ships in "silver bullet" mode with a certain set of
recommended protections activated -- the understanding being that
these signatures have extremely low false positives. Yet these
products have a larger signature base that, if enabled, can stop both
threats and normal traffic. Naturally, those signatures aren't enabled
by default, because the product is, after all, a silver bullet, just
as your ISS Proventia claims to be. ;)

 I think metrics would be interesting here -- whether numeric or
qualitative. You explained that Snort has poor SMB and MSRPC parsers,
and that is interesting data. While I'm interested in getting the
details as to where Snort is imperfect, I'm also interested in getting
better qualitative data on the IPS/IDS divide: how much can an IPS
drop without false positives, versus how much can an IDS detect (with,
of course, false positives)? Put another way, how many false negatives
can get through a default IPS?

~~~~~~~~~~~~~~~~~~
Brian Basgen
IT Security Architect
Pima Community College

-----Original Message-----
From: Palmer, Paul (ISSAtlanta) [mailto:PPalmer () iss net] 
Sent: Monday, April 10, 2006 1:38 PM
To: Basgen, Brian; focus-ids () securityfocus com
Subject: RE: IDS vs. IPS deployment feedback

Brian,

I work in ISS' research department. This puts me in a somewhat unique
position to answer your question.

One example is the signature coverage for MS05-039/CVE-2005-1983. When
the vulnerability was initially announced, the SNORT community (I do
not know which exact group created these signatures) added
approximately 300 different signatures to provide vulnerability-based
coverage for it. That is to say, these were not 300 overlapping
signatures from a variety of sources all designed to solve the same
problem. These were a single group of 300 signatures designed to work
in concert to provide protection against unknown exploits (no known
exploits existed at the time these signatures were added).

The fact that 300 signatures were necessary was due to weaknesses of the
SNORT engine itself (it doesn't have a proper MSRPC parser), not the
research community. Even so, judging from what is lacking in the 300
signatures, it seems extremely likely that the SNORT research
community is not aware of all of the different vectors through which
the vulnerability can be exploited, since they could easily have added
coverage for these had they been aware of them. It also seems likely
that the research community is not aware of all of the evasion
techniques available via MSRPC and SMB, as there are evasions for
which I have never seen SNORT signature coverage.

It is interesting to note that once a proof of concept exploit became
available, the 300 signatures disappeared and were replaced by a small
number of signatures to just provide coverage for the known proof of
concept exploits.

ISS, which has proper SMB and MSRPC parsers, needed to add only one
signature to provide vulnerability-based coverage for the buffer
overflow attack (there is another signature for a related, but different
DoS-only vector). Other vendors vary in the number of distinct
signatures they require for coverage. However, I have seen none that
come close to the ~300 fielded by SNORT.
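
A toy way to picture the difference (this is illustrative Python of my
own; the byte patterns, opnum, and length check are hypothetical, not
the real MS05-039 details): a pattern-only engine presumably needs a
separate rule for each encoding, fragmentation pattern, and transport
variant, while an engine that decodes SMB/MSRPC first can apply a
single check on the field the vulnerability depends on.

    # Illustrative sketch only; not Snort's or ISS' actual detection logic.

    # Pattern-matching style: one signature per byte-level variant.
    PATTERN_SIGS = [
        b"\x05\x00\x0b",       # hypothetical variant 1
        b"\x05\x00\x00\x0b",   # hypothetical variant 2
        # ... hundreds more to cover encodings, fragments, named pipes ...
    ]

    def pattern_engine(packet: bytes) -> bool:
        return any(sig in packet for sig in PATTERN_SIGS)

    # Protocol-aware style: decode first, then one check on the field the
    # vulnerability actually depends on.
    def decode_msrpc(packet: bytes) -> dict:
        # Placeholder: a real decoder reassembles SMB, handles fragmentation
        # and the different NDR encodings before returning request fields.
        return {"opnum": packet[0] if packet else 0, "arg_len": len(packet)}

    def protocol_engine(packet: bytes) -> bool:
        req = decode_msrpc(packet)
        return req["opnum"] == 0x37 and req["arg_len"] > 0x400  # one rule

That is the sense in which one signature on top of proper parsers can
cover what would otherwise take hundreds of pattern rules, with much of
the evasion handling living in the decoder rather than in the rules.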

Paul

-----Original Message-----
From: Basgen, Brian [mailto:bbasgen () pima edu]
Sent: Friday, April 07, 2006 12:28 PM
To: focus-ids () securityfocus com
Subject: RE: IDS vs. IPS deployment feedback


Andrew,

> In some technologies, one signature handles an entire class of
> vulnerabilities. Where Snort needs multiple signatures for the same
> vulnerability, ISS can protect against the vulnerability with 1
> signature. TP is the same.
 
 Interesting. Can you show me an example of this? I'd like to
understand the design differences that lead the snort signature base
to be as inefficient as you describe.

> ISS, for example, does their own independent security research and
> has signatures to protect against things that Snort people don't even
> know about.

 I don't understand how this differs from the Sourcefire Vulnerability
Research Team. Can you provide some details, specific examples, of where
the Sourcefire VRT has failed and the ISS research has succeeded?

~~~~~~~~~~~~~~~~~~
Brian Basgen
IT Security Architect
Pima Community College
