IDS mailing list archives

RE: Intrusion Prevention


From: "Brian Laing" <Brian.Laing () Blade-Software com>
Date: Tue, 14 Jan 2003 11:48:44 -0800

I would just like to add my .02 to this thread.

While there are different testing methodologies out there used by
various reviewers, consulting organizations, and vendors, what seems to
be missing when we talk about these methodologies is breaking them down
into the four main factors of testing. The first three focus mainly on
the sensor itself, while the fourth targets the management application.
        
1. Speed: the rate at which the IDS can process packets. The
question is not whether it can still detect X attacks at speed Y, but
at what point it starts dropping packets.
        
2. Coverage: what can it detect? This covers basic attacks,
fragmented traffic, evasion techniques, etc.
        
3. The combination of speed and coverage: at what speed does the
sensor still avoid dropping packets, yet start to miss attacks? (See
the sketch after this list.)
        
4. The management application: this is one of the bigger areas,
and potentially even more difficult to test than the preceding three,
as it can encompass quite a bit and requires a level of operational
experience that many people who do reviews will not have. This sort of
testing needs to cover not only alert management but also sensor
management, all of which can be VERY subjective: what works for one
individual may not work for another. It also needs to cover some areas
of performance as well: how does the management application handle a
sensor that is pegged at 100% and sending up LOADS of alerts!!
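
To make factors 1-3 concrete, here is a minimal sketch of the kind of
sweep I mean. It is purely illustrative: it assumes a pcap holding a
known set of attacks, tcpreplay on the test box, and a count_alerts()
helper that stands in for however you would query the product under
test - none of this is any vendor's actual tooling.

#!/usr/bin/env python
# Illustrative speed/coverage sweep (factors 1-3 above).
import subprocess

N_ATTACKS = 50                    # attacks known to be in the pcap
RATES_MBPS = [10, 50, 100, 250, 500]

def count_alerts():
    # Hypothetical stand-in: query the management console or alert
    # database of the IDS under test for how many of the known
    # attacks it reported during the last replay window.
    raise NotImplementedError("depends on the product under test")

for rate in RATES_MBPS:
    # Factor 1 (speed): replay the same traffic at increasing rates.
    subprocess.run(
        ["tcpreplay", "--intf1=eth1", "--mbps=%d" % rate, "attacks.pcap"],
        check=True,
    )
    detected = count_alerts()
    # Factor 3 (speed x coverage): the interesting number is the rate
    # at which detections start to fall off, which can happen before
    # the sensor visibly drops packets.
    print("%4d Mbps: %d/%d attacks detected" % (rate, detected, N_ATTACKS))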

Knowing whether a test is stronger or weaker in these categories can be
of great service to the end customer and can dramatically impact their
IDS purchase. For example, if management of the sensor is key for me,
then I am going to want to see reviews/tests done with a slant towards
that. Personally, having done a lot of large IDS sensor deployments in
the past (some as large as 2,000 sensors), I have had customers who
cared more or less about different areas of testing. I had one customer
who cared more about the ability to manage the sensor's alerts: even
if the sensor had 100% accuracy, if he could not manage the alerts it
was as good as not detecting the event.
        
While doing these implementations, and while working at an IDS vendor,
I saw that the main focus for a lot of people is the attack coverage
itself. This is often because it is the easiest and quickest statistic
for people to understand. It is also partly because attack coverage is
something that can, and needs to, be verified on a sensor once it is in
production to make sure it is working, especially after an update!

This focus on attack coverage is one of the main reasons Blade
Software was founded: to develop and offer software that can be used to
test and measure the attack coverage of an IDS and other security
technologies.

I would like to mention a couple of points on this list, as there seem
to have been some inaccurate statements made about IDS Informer
recently. When the application was first released it was designed to
trigger alerts on basic, non-stateful IDSes, and therefore some of our
attacks did not include the full 3-way handshake, which caused issues
for some IDS products. When this was highlighted as an issue we began
work on developing the second generation of attacks for the product.
While this work was taking place, IDS Informer was used by NSS Labs for
the IDS group test review published last year; to ensure complete
accuracy, NSS created and compiled their own set of attacks, which were
used in the test so that there could be no confusion.
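
For readers unfamiliar with the handshake issue: a stateful engine only
inspects payloads on connections it has seen established, so replayed
attack packets with no preceding handshake are silently ignored. Below
is a rough sketch of the difference using scapy, with illustrative
addresses and a made-up payload (this is not our actual attack code):

from scapy.all import IP, TCP, Raw, send, sr1

target = IP(dst="192.0.2.10")
payload = Raw(b"GET /exploit HTTP/1.0\r\n\r\n")

# Stateless replay: payload with no handshake. A non-stateful IDS may
# still signature-match it; a stateful one discards it because there
# is no tracked connection to associate it with.
send(target / TCP(sport=40000, dport=80, flags="PA") / payload)

# Stateful replay: complete the 3-way handshake first. (In practice
# the local kernel must be kept from RST-ing the connection it does
# not know about, e.g. with a firewall rule.)
syn = target / TCP(sport=40001, dport=80, flags="S", seq=1000)
synack = sr1(syn, timeout=2)
if synack is not None:
    send(target / TCP(sport=40001, dport=80, flags="A",
                      seq=1001, ack=synack.seq + 1))
    send(target / TCP(sport=40001, dport=80, flags="PA",
                      seq=1001, ack=synack.seq + 1) / payload)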

In November we released the second generation of attacks, which are all
100% accurate and fully complete. This generation also includes
successful and unsuccessful versions of each attack where possible.

        
To ensure this accuracy continues, we are creating an attack reference
library containing each exploit in compiled and uncompiled form, the
system images used to launch the attack, the targets for each attack,
and various other items collected as part of the test, such as what
changed on the target or how the target responded to the attack. This
has all been done so that anyone doing testing can use Blade Software
for any test requiring attack coverage as a component.
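
As a purely hypothetical illustration of what a single record in such a
library might capture (the field names here are my own invention, not
an actual schema):

from dataclasses import dataclass

@dataclass
class AttackRecord:
    name: str              # e.g. "example-overflow"
    exploit_source: str    # the uncompiled exploit
    exploit_binary: str    # the compiled exploit
    attacker_image: str    # system image used to launch the attack
    target_image: str      # system image of the target
    target_changes: str    # what changed on the target
    target_response: str   # how the target responded
    pcap_success: str      # capture of the successful variant
    pcap_failure: str      # capture of the unsuccessful variant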

To facilitate access to this repository we have put together a Vendor
Alliance program, and we are inviting each IDS vendor to participate.
If you are interested in joining this alliance, or if you have
questions about our methods or attacks, please feel free to drop me an
email or give me a call.

Cheers,
Brian



-------------------------------------------------------------------
Brian Laing
CTO
Blade Software
Cellphone: +1 650.280.2389
Telephone: +1 650 367.9376
eFax: +1 208.575.1374
Blade Software - Because Real Attacks Hurt
http://www.Blade-Software.com
-------------------------------------------------------------------


-----Original Message-----
From: Golomb, Gary [mailto:GGolomb () enterasys com] 
Sent: Monday, January 06, 2003 1:08 PM
To: Rick Williams; focus-ids () securityfocus com
Subject: RE: Intrusion Prevention


See below.

-----Original Message-----
From: Rick Williams [mailto:rickwi () hotmail com] 
Sent: Wednesday, December 25, 2002 2:30 PM
To: focus-ids () securityfocus com
Subject: Re: Intrusion Prevention

{snip}

...and I personally would not be putting ANY 
product forward to be considered until I had seen what the NSS guys had
to 
say about it.

I am hoping that both Netscreen and Sourcefire will be in the next
edition 
and I have to say that Dragon was off our list of IDS for ANY speed of 
network some time ago due to its constant omission from these reports
(you 
don't have to pay for the 100Mbit IDS reports, they are all on-line for
free 
in full).

EOF
}}}}}} Comments below:


Rick, et al.:

Hi there!

This thread has already brought up some interesting points over the past
two weeks, so I'll make an effort to avoid repeating what has been
stated already. We've all got enough email to catch up on from the
holidays without being redundant! I just wanted to address the comment
made above. 

The suggestion that someone wouldn't look at Dragon, or ANY IDS for
that matter, because it repeatedly does not participate in a specific
test is a little unnerving. Sure, there are industry testing standards,
such as OSEC, which are undoubtedly the most inclusive and open to peer
analysis, and I would also be suspicious of anyone who refuses to
participate in those. (We can revisit this point later.) But to take a product
that continuously participates successfully in other magazine and
third-party tests, but does not participate in one (or another) specific
test... Well, for me that would raise more red-flags about the test than
the product. If Dragon (or **any** IDS) refused all the tests publicly
available, then of course we could draw some conclusions like you have
alluded to. Please realize that I am not the official spokesperson for
these types of subjects. Just someone who's close enough to have some
thoughts on it... 

There are several issues at work when we discuss testing. First of all,
and most generically, getting involved in a test is very time
consuming. You really don't think the people who are doing the testing
are actually taking a few months to learn the ins-and-outs of each
product before testing it, do you? Of course not! I've only seen it done
once, and it was by the same guys who wrote much of the OSEC testing
standard. (Incidentally, that test took almost a year to complete!)
Anyways, for every test we participate in, we need to send people from
our Dev, QA, or R&D teams to assist on-site with the testing cycle.
Those are resources that are being taken away from their normal schedule
to assist the testing team in question. Since there are MANY tests
completed every year, we have to carefully choose which to participate
in, and which not to. 

Time and resources are not the only factors involved in selecting which
tests to (or not to) participate in. Generally, we are given a
description of the testing methodology upfront. Believe it or not,
sometimes we're told that we cannot see the testing methodology upfront.
This dumbfounds me for all the reasons that MJR (and others) already
brought up. It is too easy to inadvertently (and sometimes
intentionally - read: Miercom's test of Intrusion.com) skew the results
of any IDS test. When I think of testing, I think of scientific process.
Unfortunately, many of the IDS tests you read each year do not adhere to
any sort of process, much less an actual scientific methodology. If a
third-party testing group tells us that we are not allowed to view the
test process we'll be subjected to, then we will probably reject the
offer to test - because of all the terribly flawed test plans we have
already seen to date. For many tests we have seen, the ENTIRE test plan
reflects the tester's understanding of one particular facet of IDS
technologies/methodologies (or worse, lack thereof). This is not a bad
thing, if the test is billed as only testing that acute area in the
first place. Unfortunately, they frequently are not.

This is one of the greatest promises of OSEC. It was written by a large
group of people who have backgrounds in network-hardware performance
testing, blackbox testing, pen testing, IDS development, IDS evasion
development, etc... Additionally, if there is something that you or I
don't like about it, there is an entire community there to hear ideas
for improvements, not just one or two people from a privately-held
organization. And that's one of the most significant differences to
begin with! Every aspect of the test is available to anyone who wishes
to see it, and if you don't like what you see, you can get on here (or
go to them directly) and speak your mind about the test, results, or
methods. They are helpful and take your input very seriously. At least,
that's the experience I've had.

Now, as far as the NSS test goes... This is not a free test. Not only
does each company have to pay for the tests, they have to pay
additionally for the reports generated by the tests. While the reasons
for this are reasonable, this alone should raise some flags about the
agenda of companies that drive marketing campaigns on results from tests
like these. I'll stop here on this one. 

Also - and more importantly - there have been issues with NSS testing
methodologies. Rather than have my slanted (and VERY strong) opinion on
the subject, look at the tools they use to implement their tests, then
do a search on lists (like this one) to see some of the pros/cons of
using those tools. Put those individual discussions together, and you'll
get a clearer view of the bigger picture here.

Anyways, I think the point has already been made in other emails, but...
Don't base your decisions exclusively on one test - it's too easy to
introduce significant testing methodology flaws into a test; ***DO
NOT*** solely base your decision on test results that are given to you
from a vendor; and if you have any doubts - test it yourself or ask for
other end-users' experiences on a list like this. There are things like
stability, support, and the ability to effectively integrate into your
environment that frequently cannot be discovered without your own
testing. Hopefully the vendors will respect your question enough to not
skew the conversation. (Right Simon?!)

Anyways, just some thoughts...

-gary


Gary Golomb
Detection Research Engineer
IDS Group
Enterasys Networks
410-312-3194 

