IDS mailing list archives
RE: IDS vs. IPS deployment feedback
From: "Palmer, Paul (ISSAtlanta)" <PPalmer () iss net>
Date: Wed, 12 Apr 2006 20:38:24 -0400
Matthew,

Matthew Watchinski wrote:
You state that Snort uses 300 rules to cover one vulnerability while claiming that ISS uses 1. While this may be true, it is also irrelevant.

Andrew Plato was trying to make the point that you cannot judge the completeness of coverage based solely upon signature count, since some products use more than one signature for coverage. Brian Basgen then asked for an example in which Snort used more signatures to provide coverage than ISS did. So I have to disagree: what I wrote was in support of this dialogue and was completely relevant to the topic of the thread. That said, it seems that to some degree you support Andrew's position. That is, signature count is an irrelevant measure of vulnerability coverage.
What we do with our rules language, ISS does with their MSRPC/SMB parsers. These parsers have just as many or more code paths to handle exactly what the Snort rules are doing.
I agree. The parsers have as many or more code paths. However, I do not agree that they handle exactly what the Snort rules are doing. In this specific context of MSRPC and SMB, the parsers are doing much more than what the Snort rules are doing.
It also covers a number of possible evasion techniques:
1. Bind padding
2. Alter Context
3. Write and Read ANDX
4. Unicode / non-Unicode
5. Little endian / big endian
6. etc.
About the only thing it doesn't cover is SMB and DCERPC fragmentation, which will be available in Snort 2.6.1.
I suspect that we strongly disagree on the efficacy of this coverage.
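The distinction being argued here can be sketched in a few lines. What follows is a toy model in Python, not ISS PAM code or a Snort preprocessor; the `normalize` and `looks_like_pnp_overflow` names, the `PNP` marker, and the threshold are all invented for illustration. The point it shows is why a protocol parser collapses variants: fragments are reassembled into one canonical request before a single vulnerability check runs, so each on-the-wire variant does not need its own signature.

```python
# Toy model of parser-based detection: normalize first, then apply one
# check. Without the parser step, each fragmentation variant of the
# same attack would need its own signature.

def normalize(fragments):
    """Reassemble a fragmented request into one canonical byte string."""
    return b"".join(fragments)

def looks_like_pnp_overflow(request, threshold=64):
    """Single vulnerability check, applied only to normalized requests."""
    return request.startswith(b"PNP") and len(request) > threshold

attack = b"PNP" + b"A" * 100

# The same attack, carried two different ways on the wire:
variant_a = [attack]                    # unfragmented
variant_b = [attack[:10], attack[10:]]  # split into two fragments

print(looks_like_pnp_overflow(normalize(variant_a)))  # True
print(looks_like_pnp_overflow(normalize(variant_b)))  # True
```

A signature engine matching raw packets would see `variant_b`'s first fragment as a short, harmless `PNP` request; only the parser's reassembly makes the one check sufficient.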
It is interesting to note that once a proof of concept exploit became available, the 300 signatures disappeared and were replaced by a small number of signatures to just provide coverage for the known proof of concept exploits.

Incorrect. The initial set of rules included all of the potential connection methods that Microsoft stated the vulnerable service could bind to. During the initial release, we chose to release rules for those connection methods even though our research did not agree. After further research, we were more confident that Microsoft's initial announcement overstated the connection methods, and we therefore reduced the ruleset to the appropriate connection methods.
I agree that I was incorrect with respect to the VRT rulesets. My facts were derived from competitive research we performed several months ago; I believe the information came from examining the Bleeding Snort rulebase. I just checked the latest VRT certified ruleset for registered users. It does NOT appear that all of the rules were replaced by a small number of rules covering just the proof of concept exploits: there are still 256 rules for MS05-039.
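For what it's worth, a rule count in the hundreds is exactly what a combinatorial generator produces. The sketch below is hypothetical — the generator and its output format are invented, not the actual VRT tool — but the transports and evasion variants are the ones listed earlier in this thread, and their cross product alone lands in the same order of magnitude as the ~256–300 rules under discussion.

```python
# Hypothetical sketch of why an auto-generator emits hundreds of rules
# for one vulnerability: one rule per (transport, encoding, byte order,
# evasion) combination. The rule text here is illustrative only.
from itertools import product

transports = [
    "NETBIOS DCERPC NCADG-IP-UDP",
    "NETBIOS DCERPC NCACN-IP-TCP",
    "NETBIOS SMB DIRECT",
    "NETBIOS SMB-DS",
    "NETBIOS DCERPC NCACN-HTTP",
    "NETBIOS DCERPC DIRECT",
]
encodings = ["unicode", "non-unicode"]
byte_orders = ["little-endian", "big-endian"]
evasions = ["plain", "bind-padding", "alter-context", "write-andx", "read-andx"]

rules = [
    f"{t} pnp-overflow ({enc}, {bo}, {ev})"
    for t, enc, bo, ev in product(transports, encodings, byte_orders, evasions)
]
print(len(rules))  # 6 * 2 * 2 * 5 = 120 rule variants from a single flaw
```

Trimming one dimension — say, dropping transports Microsoft turned out not to expose — shrinks the product multiplicatively, which is consistent with a ruleset contracting after further research rather than being replaced wholesale.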
In the end it's all about methodology. ISS puts all its logic into C modules, while Snort places its functionality in its rules language. ISS handles DCERPC/MSRPC/SMB in C modules that can't be modified by the user or easily validated, while Snort uses open rules and open code to handle the same problems.
Wow. Great marketing spin. However, if you create an SMB and MSRPC preprocessor to handle the fragmentation issues in those protocols, won't you have validated that ISS' decision to place significant parts of its IP in C modules instead of rules was likely correct?

Paul

-----Original Message-----
From: Matthew Watchinski [mailto:mwatchinski () sourcefire com]
Sent: Wednesday, April 12, 2006 6:06 PM
To: Palmer, Paul (ISSAtlanta)
Cc: Basgen, Brian; focus-ids () securityfocus com
Subject: Re: IDS vs. IPS deployment feedback

Paul,

I am the Director of the Vulnerability Research Team at Sourcefire. This puts me in a somewhat unique position to actually respond with facts to the speculation below.

You state that Snort uses 300 rules to cover one vulnerability while claiming that ISS uses 1. While this may be true, it is also irrelevant. What we do with our rules language, ISS does with their MSRPC/SMB parsers. These parsers have just as many or more code paths to handle exactly what the Snort rules are doing.

In addition, Snort's model of rule-driven, vulnerability-based detection provides the end-user with the power to determine exactly what they want to do, and the ability to turn individual sub-sections of the detection on and off to suit their networking environment. Additionally, you can see and modify any piece of this detection you deem necessary, giving the end-user a very flexible solution to the problem.

Additional comments in-line.

Palmer, Paul (ISSAtlanta) wrote:
Brian,

I work in ISS' research department. This puts me in a somewhat unique position to answer your question. One example is the signature coverage for MS05-039/CVE-2005-1983. When the vulnerability was initially announced, the SNORT community (I do not know which exact group created these signatures) added approximately 300 different signatures to provide vulnerability-based coverage for the vulnerability. That is to say, these were not 300 different overlapping signatures from a variety of sources all designed to solve the same problem. These were a single group of 300 signatures designed to work in concert to provide protection against unknown exploits (no known exploits existed at the time these signatures were added).
These 300 rules were created by the Sourcefire VRT and were added to detect the possible attack vectors for MS05-039. These rules are auto-generated by our MSRPC/DCERPC/SMB rule generator, which understands and creates rules for the following:

1. NETBIOS DCERPC NCADG-IP-UDP
2. NETBIOS DCERPC NCACN-IP-TCP
3. NETBIOS SMB DIRECT
4. NETBIOS SMB-DS
5. NETBIOS DCERPC NCACN-HTTP
6. NETBIOS DCERPC DIRECT
7. etc.

It also covers a number of possible evasion techniques:

1. Bind padding
2. Alter Context
3. Write and Read ANDX
4. Unicode / non-Unicode
5. Little endian / big endian
6. etc.

About the only thing it doesn't cover is SMB and DCERPC fragmentation, which will be available in Snort 2.6.1.
The fact that 300 signatures were necessary was due to weaknesses of the SNORT engine itself (it doesn't have a proper MSRPC parser), not the research community. Even so, judging from what is lacking in the 300 signatures, it seems extremely likely that the SNORT research community is unaware of all of the different vectors through which the vulnerability can be exploited, since they could easily have added coverage for these had they been aware of them. It also seems likely that the research community is unaware of all of the evasion techniques available via MSRPC and SMB, as there are evasions for which I have never seen SNORT signature coverage.
Interesting. Of course, ISS's PAM documentation says nothing about these additional evasion techniques. Your customers, and the Internet as a whole, might like to know about the evasions you describe. Sounds like a good CanSec or BlackHat talk.
It is interesting to note that once a proof of concept exploit became available, the 300 signatures disappeared and were replaced by a small number of signatures to just provide coverage for the known proof of concept exploits.
Incorrect. The initial set of rules included all of the potential connection methods that Microsoft stated the vulnerable service could bind to. During the initial release, we chose to release rules for those connection methods even though our research did not agree. After further research, we were more confident that Microsoft's initial announcement overstated the connection methods, and we therefore reduced the ruleset to the appropriate connection methods.
ISS, which has proper SMB and MSRPC parsers, needed to add only one signature to provide vulnerability-based coverage for the buffer overflow attack (there is another signature for a related, but different, DoS-only vector). Other vendors vary in the number of distinct signatures they require for coverage. However, I have seen none that come close to the ~300 fielded by SNORT.

Paul
In the end it's all about methodology. ISS puts all its logic into C modules, while Snort places its functionality in its rules language. ISS handles DCERPC/MSRPC/SMB in C modules that can't be modified by the user or easily validated, while Snort uses open rules and open code to handle the same problems.

Thanks,
Matthew Watchinski
Director, Vulnerability Research
Sourcefire, Inc.
-----Original Message-----
From: Basgen, Brian [mailto:bbasgen () pima edu]
Sent: Friday, April 07, 2006 12:28 PM
To: focus-ids () securityfocus com
Subject: RE: IDS vs. IPS deployment feedback

Andrew,

In some technologies, one signature handles an entire class of vulnerabilities. Where Snort needs multiple signatures for the same vulnerability, ISS can protect against the vulnerability with 1 signature. TP is the same.

Interesting. Can you show me an example of this? I'd like to understand the design differences that lead the Snort signature base to be as inefficient as you describe.

ISS, for example, does their own independent security research and has signatures to protect against things that Snort people don't even know about.

I don't understand how this differs from the Sourcefire Vulnerability Research Team. Can you provide some details, specific examples, of where the Sourcefire VRT has failed and the ISS research has succeeded?

~~~~~~~~~~~~~~~~~~
Brian Basgen
IT Security Architect
Pima Community College

------------------------------------------------------------------------
Test Your IDS

Is your IDS deployed correctly? Find out quickly and easily by testing it with real-world attacks from CORE IMPACT. Go to
http://www.securityfocus.com/sponsor/CoreSecurity_focus-ids_040708
to learn more.
------------------------------------------------------------------------