IDS mailing list archives

Re: OSEC [WAS: Re: Intrusion Prevention]


From: George Capehart <gwc () capehassoc com>
Date: Sat, 04 Jan 2003 13:48:25 -0500

Randy Taylor wrote:


<snip>


That's exactly right. A poor review from one or more of NSS, Miercom,
or Neohapsis is bad news from a marketing perspective. Sales
                                                         ^^^^^
numbers spike or dip shortly after a review's release date.
  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

<snip>

At 11:34 PM 12/30/2002 -0500, mjr wrote:

So, I think there's a place for *ALL* these different tests
and it's a bad idea to throw mud at any of them. Honestly,
I think that a smart customer should do their own. It's
   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
flat-out INSANE to spend more than $100,000 on a product
   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
without doing an operational pilot of it and 2 competitors.
   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Yet, many companies do exactly that. They get what they
   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
deserve.
   ^^^^^^^  !!!!!


For the most part, I'd agree. All of the tests have a purpose.
And yes, smart customers will do their own speeds and feeds,
but only if their design and deployment schedules permit it.

Because those schedules are often very tight, speeds and feeds
are left to be revealed by one of OSEC, NSS, or Miercom.
The subjective elements, such as the quality of information delivered
to the front-line analysts, overall product ease-of-use, manageability,
and scalability, among others, are the major differentiators at the
end-customer testing/recommendation/product selection levels. A
cost-benefit analysis, which has both subjective and objective
elements, is usually the final discriminator when two or more products
come through tests close together. Even then, I've seen many cases
where the customer just likes a particular product, regardless - their
bias alone takes every possible selection criterion and boils it
into steam.

Schedules would allow for end-to-end testing in a perfect world, but
it's a rare day when the real world lines up with the ideal.

And herein lies the rub, IMHO.  It seems to me that this is yet another
example of "problems" that are being discussed (slightly off-topic)
right now in a thread on firewall-wizards (Subject: Anybody Recognize
These Uploads?).  Seems to me the central theme could be titled: how
serious are people really about being educated consumers, and how
serious are they about managing the risks they have decided they want to
manage?  It seems reasonable enough to use published product evals, etc.
as the basis for creating a short list of candidate products (or
whatever), but, as one of my college professors was constantly reminding
me, "You don't know until you have the data."

I'm with mjr.  One of the basic tenets of risk management is that one
does not spend more to manage the risk than it would take to recover
from the harm caused by an event that exploited the risk if it were left
unmanaged.   IDSs are one component in a defense-in-depth strategy.  No
matter what the cost, if an IDS is deemed an appropriate component of
the set of necessary controls, this sort of implies that the asset that
is being protected is pretty valuable.  If the risk assessment provides
justification for spending $100K US on an IDS, "IPS" or whatever, that
raises the value of the asset considerably.  To me, to choose to
implement a component that is protecting an asset of that value without
knowing */exactly/* what its operating parameters are going to be in the
production environment *is* insane.  And if some pointy-haired manager
pushed back and said there wasn't enough <insert excuse-of-the-day
here>, you can bet your bippy that the fact that the operating
characteristics of this component are unknown will be identified as a
residual risk that would be discussed frequently throughout the
certification and accreditation process and would show up prominently as
residual risk in the ST&E reports, the risk assessment report and the
certifier's statement.  So if the system is accredited, it will be
because the accrediting authority was willing to accept the risk.
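That tenet can be sketched as a back-of-the-envelope calculation.  This
is only an illustration of the principle; the dollar figures, incident
rate, and the annualized-loss-expectancy framing are my own assumptions,
not anything from the thread:

```python
# Back-of-the-envelope risk management check: a control is only worth
# buying if its cost does not exceed the expected loss it mitigates.
# All figures below are invented for illustration.

def expected_annual_loss(single_loss, annual_rate):
    """Annualized loss expectancy: loss per incident times incidents per year."""
    return single_loss * annual_rate

def control_is_justified(control_cost, single_loss, annual_rate):
    """True if the control costs no more than the loss it protects against."""
    return control_cost <= expected_annual_loss(single_loss, annual_rate)

# A $100K IDS protecting an asset whose compromise would cost $2M,
# with an estimated one incident every four years:
ale = expected_annual_loss(2_000_000, 0.25)           # $500,000/year
print(control_is_justified(100_000, 2_000_000, 0.25))  # True
print(control_is_justified(750_000, 2_000_000, 0.25))  # False
```

The point is the same one made above: if the numbers justify a $100K
control, the asset behind it is worth a great deal more, which is all
the more reason to know how the control actually behaves.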

Now, one could ask, what is the difference between this case and the
"real world" ones identified above?  Is this not yet another case of a
consumer being willing to remain uneducated?  Well, maybe it is, and
maybe it isn't.  To the extent that the business owner of the system is
willing to ignore the advice of those he pays to give him advice in
these matters, some might say, yes, it is.  But, to the extent that
he/she has formally, in writing, with signature, accepted the risk, the
accreditor has therefore publicly accepted accountability for the
potential harm.  If the business person who owns the system and the
accountability for its function formally accepts the known risk, so be
it.  He/she has made an informed business decision.  So I'm inclined to
take the position that, in this case, the accreditor *is* an educated
consumer.  He/she was apprised of the risk and accepted it in writing.

Seems to me the challenge for us in the trenches is in "managing up." 
As a propeller-head, I'd like to be able to make well-researched and
well-informed decisions and recommendations.  The problem is that there
are usually a couple of layers of pointy-haired managers between the
propeller-heads that actually do stuff and the business owner of a
system.  So, in the absence of some kind of mechanism to alert the
system owner of the risks that are incurred as a result of decisions
that are made at lower levels, the system owner remains ignorant (and
therefore vulnerable).  IMHO, a certification and accreditation process
helps everyone manage their risks . . . 

My 0.02 USD.

--
George W. Capehart

"We did a risk management review.  We concluded that there was no risk
 of any management."  -- Dilbert

