IDS mailing list archives
RE: IDS vs. IPS deployment feedback
From: "Biswas, Proneet" <pbiswas () ipolicynetworks com>
Date: Thu, 13 Apr 2006 15:49:42 -0700
Hi Brian,

Another good way to qualify a system is to judge whether a particular signature targets the exploit or the vulnerability that the exploit is aimed at.

Example: take a simple buffer overflow case where an FTP user name of more than 200 characters allows the user to execute code. The code that gets executed depends on the actual code passed in after the 200-character buffer.

False negative side
----------------------
If the signature writer has written specific signatures for the executable shell code, the chances of a false positive are very low. But there are so many exploit code combinations that he could miss one of them and thus miss an actual exploit. Tools like Metasploit let you test combinations of vulnerability plus payload to detect what kind of IPS signature is implemented.

False positive side
--------------------
If the signature writer has instead written a signature that actually tests for the buffer overflow, the chances of a false negative are low, as he would catch all the buffer overflow cases irrespective of the executable code. However, this increases his false positive scenario: even if the data after the 200 characters is not executable code, the sensor would still block or generate an alert as configured.

Thanks,
Proneet Biswas.
-------------------------------------------------------------
I find that the harder I work, the more luck I seem to have

-----Original Message-----
From: Basgen, Brian [mailto:bbasgen () pima edu]
Sent: Monday, April 10, 2006 3:10 PM
To: focus-ids () securityfocus com
Subject: RE: IDS vs. IPS deployment feedback

Paul,

Thanks for your response. I'd love to hear you qualify the differences a bit more. Every IPS ships in "silver bullet" mode with a certain set of recommended protections activated -- the understanding being that these signatures have extremely low false positives.
Yet these IPSes have a larger signature base that, if enabled, can stop both threats and normal traffic. Naturally, those signatures aren't enabled, because the product is, after all, a silver bullet -- like your ISS Proventia claims. ;)

I think metrics would be interesting here, whether numeric or qualitative. You explained the poor SMB and MSRPC parsers in Snort, and that is interesting data. While I'm interested in the details of where Snort is imperfect, I'm also interested in better qualitative data on the IPS/IDS divide. How much can the IPS drop without false positives, versus how much can an IDS detect (with, of course, false positives)? Put another way, how many false negatives can get through a default IPS?

~~~~~~~~~~~~~~~~~~
Brian Basgen
IT Security Architect
Pima Community College

-----Original Message-----
From: Palmer, Paul (ISSAtlanta) [mailto:PPalmer () iss net]
Sent: Monday, April 10, 2006 1:38 PM
To: Basgen, Brian; focus-ids () securityfocus com
Subject: RE: IDS vs. IPS deployment feedback

Brian,

I work in ISS' research department, which puts me in a somewhat unique position to answer your question. One example is the signature coverage for MS05-039/CVE-2005-1983. When the vulnerability was initially announced, the SNORT community (I do not know which exact group created these signatures) added approximately 300 different signatures to provide vulnerability-based coverage. That is to say, these were not 300 overlapping signatures from a variety of sources all designed to solve the same problem. They were a single group of 300 signatures designed to work in concert to provide protection against unknown exploits (no known exploits existed at the time these signatures were added). The fact that 300 signatures were necessary was due to weaknesses of the SNORT engine itself (it doesn't have a proper MSRPC parser), not the research community.
Even so, judging from what is lacking in the 300 signatures, it seems extremely likely that the SNORT research community is unaware of all of the different vectors through which the vulnerability can be exploited, since they could easily have added coverage for those vectors had they been aware of them. It also seems likely that the research community is unaware of all of the evasion techniques available via MSRPC and SMB, as there are evasions for which I have never seen SNORT signature coverage.

It is interesting to note that once a proof-of-concept exploit became available, the 300 signatures disappeared and were replaced by a small number of signatures covering just the known proof-of-concept exploits. ISS, which has proper SMB and MSRPC parsers, needed to add only one signature to provide vulnerability-based coverage for the buffer overflow attack (there is another signature for a related, but different, DoS-only vector). Other vendors vary in the number of distinct signatures they require for coverage. However, I have seen none that come close to the ~300 fielded by SNORT.

Paul

-----Original Message-----
From: Basgen, Brian [mailto:bbasgen () pima edu]
Sent: Friday, April 07, 2006 12:28 PM
To: focus-ids () securityfocus com
Subject: RE: IDS vs. IPS deployment feedback

Andrew,
> some technologies, one signature handles an entire class of
> vulnerabilities. Where Snort needs multiple signatures for the same
> vulnerability, ISS can protect against the vulnerability with 1
> signature. TP is the same.
Interesting. Can you show me an example of this? I'd like to understand the design differences that lead the Snort signature base to be as inefficient as you describe.
> ISS, for example, does their own independent security research and has
> signatures to protect against things that Snort people don't even know
> about.
I don't understand how this differs from the Sourcefire Vulnerability Research Team. Can you provide some details -- specific examples -- of where the Sourcefire VRT has failed and the ISS research has succeeded?

~~~~~~~~~~~~~~~~~~
Brian Basgen
IT Security Architect
Pima Community College
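[Editor's note: Proneet's exploit-versus-vulnerability signature tradeoff earlier in the thread can be made concrete with a short sketch. This is a hypothetical Python illustration, not any vendor's actual signatures; the 200-byte FTP USER overflow and the shellcode bytes are placeholders taken from his example.]

```python
# Hypothetical sketch of the FTP USER overflow case: a user name longer
# than 200 bytes triggers the vulnerability, regardless of payload.

EXPLOIT_SHELLCODE = b"\x90\x90\xcc\xcc"  # placeholder bytes for one known payload


def exploit_signature(username: bytes) -> bool:
    """Exploit-based: match one known shellcode pattern (low false positives)."""
    return EXPLOIT_SHELLCODE in username


def vulnerability_signature(username: bytes) -> bool:
    """Vulnerability-based: flag any USER argument over the 200-byte limit."""
    return len(username) > 200


known_exploit = b"A" * 200 + EXPLOIT_SHELLCODE
novel_exploit = b"A" * 200 + b"\xeb\xfe"  # different payload, same vulnerability
benign_long   = b"A" * 250                # oversized, but not executable code

# Exploit signature: catches the known payload, misses the novel one
# (the false negative side Proneet describes).
assert exploit_signature(known_exploit)
assert not exploit_signature(novel_exploit)

# Vulnerability signature: catches both exploits, but also alerts on the
# harmless oversized name (the false positive side).
assert vulnerability_signature(novel_exploit)
assert vulnerability_signature(benign_long)
```

This is also, in miniature, Paul's point about parsers: the vulnerability-based check only works because the sketch already has the USER argument isolated as a field; without a protocol parser to extract it, an engine is pushed toward many pattern-matching signatures instead of one length check.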
Current thread:
- RE: IDS vs. IPS deployment feedback, (continued)
- RE: IDS vs. IPS deployment feedback Andrew Plato (Apr 13)
- RE: IDS vs. IPS deployment feedback Kyle Quest (Apr 13)
- RE: IDS vs. IPS deployment feedback Palmer, Paul (ISSAtlanta) (Apr 13)
- Re: IDS vs. IPS deployment feedback Paul Schmehl (Apr 15)
- RE: IDS vs. IPS deployment feedback Cojocea, Mike (IST) (Apr 13)
- RE: IDS vs. IPS deployment feedback Gary Halleen (ghalleen) (Apr 13)
- Re: IDS vs. IPS deployment feedback Randal T. Rioux (Apr 18)
- Re: IDS vs. IPS deployment feedback Frank Knobbe (Apr 13)
- RE: IDS vs. IPS deployment feedback Basgen, Brian (Apr 13)
- RE: IDS vs. IPS deployment feedback Palmer, Paul (ISSAtlanta) (Apr 15)
- RE: IDS vs. IPS deployment feedback Biswas, Proneet (Apr 15)
- RE: IDS vs. IPS deployment feedback Palmer, Paul (ISSAtlanta) (Apr 15)
- RE: IDS vs. IPS deployment feedback Mark Teicher (Apr 15)
- RE: IDS vs. IPS deployment feedback PPowenski (Apr 19)
- Re: IDS vs. IPS deployment feedback virtuale (Apr 21)