IDS mailing list archives

Re: OSEC [WAS: Re: Intrusion Prevention]


From: Randy Taylor <gnu () charm net>
Date: Fri, 03 Jan 2003 10:09:08 -0500

At 11:34 PM 12/30/2002 -0500, mjr wrote:
Greg Shipley writes:
>However, with that said, we've noticed over time that a) various industry
>"certification" efforts were watered-down, b) many of those efforts were
>more or less irrelevant, and c) there was a definite need in the industry
>for a trusted 3rd party to validate vendor claims with REAL testing.

Testing is a fundamental problem with all products, and always
has been. What customers want is someone to tell them what
sucks and what doesn't - while still providing enough facts
that they can have at least a minimal understanding of what is
going on. As you know, establishing a truly deep understanding
requires a huge investment in time - more than virtually anyone
is willing to make. That's why even some testing groups have
been fooled in the past. For example, witness Miercom's snafued
test of Intrusion.com's product. A lot of customers and industry
analysts (and probably some people at Miercom) were fooled by
that rigged benchmark.

*chuckle* That "benchmark" was one for the record books.

[snip]

What I gather you're trying to do with OSEC is test stuff and
find it lacking or not. Basically you want to say what products
you think are good or bad - based on your idea (with input from
customers and vendors) of good and bad. Of course, if I were a
vendor, I'd skewer you as publicly and often as possible for
any bias I could assign you, because your approach is inherently
confrontational. Back when I worked at an IDS vendor, I tried to
talk our marketing department out of participating in your
reviews because, frankly, the vendors are forced to live or
die based on your _opinion_. That's nice, but we've seen before
that opinions of how to design a product may vary. Many industry
expert types "Don't Get" this important aspect: products are
often the way they are because their designers believe that's how
they should be. Later the designers defend and support those
aspects of their designs because that's how they believe their
products should be - not simply out of convenience. The egg
really DOES sometimes come before the chicken. :)

That's exactly right. A poor review from one or more of NSS, Miercom,
or Neohapsis is bad news from a marketing perspective. Sales
numbers spike or dip shortly after a review's release date.

[snip]


So here's the problem: how do you test a product without holding
any subjective beliefs in your criteria? Man, that's hard. I wish
you luck. (By the way, I've noted a very strong subjective preference
on your part for Open Source solutions, most notably Snort. You've
consistently cast their failures in a kinder light than anyone
else's, etc... So be careful...)

Back when I worked for Enterasys, the scuttlebutt I'd hear from
others was that Neohapsis was always biased towards Dragon,
not Snort. Snort And Nothing but Snort (aka SANS) supposedly
had the bias for Marty and his crew. Looking back at it now
outside-in, I think it was a matter of perspective, mostly skewed.

[snip]


So, I think there's a place for *ALL* these different tests
and it's a bad idea to throw mud at any of them. Honestly,
I think that a smart customer should do their own. It's
flat-out INSANE to spend more than $100,000 on a product
without doing an operational pilot of it and 2 competitors.
Yet, many companies do exactly that. They get what they
deserve.

For the most part, I'd agree. All of the tests have a purpose.
And yes, smart customers will do their own speeds and feeds,
but only if their design and deployment schedules permit it.

Because those schedules are often very tight, speeds and feeds
are left to be revealed by one of OSEC, NSS, or Miercom.
The subjective elements, such as the quality of information delivered
to the front-line analysts, overall product ease-of-use, manageability,
and scalability, among others, are the major differentiators at the
end-customer testing/recommendation/product selection levels. A
cost-benefit analysis, which has both subjective and objective
elements, is usually the final discriminator when two or more products
come through tests close together. Even then, I've seen many cases
where the customer just likes a particular product, regardless - their
bias alone takes every possible selection criterion and boils it
into steam.

Schedules would allow for end-to-end testing in a perfect world, but
it's a rare day when the real world lines up with the ideal.

Thanks, as always, for another good read, Marcus.



Best regards,

Randy
-----
"Life was simple before World War II. After that, we had systems."
--- Admiral Grace Hopper


