IDS mailing list archives

RE: Intrusion Prevention Systems


From: "Dominique Brezinski" <dom () decru com>
Date: Wed, 6 Nov 2002 18:07:49 -0800

Whoa--it has been a long time since I participated in one of these forums.

I read a statistical sampling of this thread, and it is nice to see that
there are people that get it and are open to dialog in this space.  The
following is intended to be a bit controversial, so take it as such.  Argue,
with civility, if you disagree.  To start, some really good points have been
made already:

-those organizations that are likely to have the expertise to successfully
deploy and tune an adaptive IDS or IPS product are *exactly* those
organizations that are successfully patching their systems, deploying good
security architectures, and are generally immune to the common set of known
attacks.  I think the author used the term isomorphic to describe the
relationship.  I have yet to see a classic NIDS provide any return on
investment for such organizations, and it is possible to actually quantify
the losses associated with the deployment and operations of such systems.
The services that are valuable to these organizations are those that give
them early indication that problems are coming, identify new threats they
may not be immune to, and allow them to cost-effectively respond to issues
that do affect them.

-network-based active response systems are generally *reactive* by design,
meaning that their responses occur after the traffic has reached the target
system.  In the case of buffer overflows and similar exploits, that is too
late in the game (someone pointed out the jill.c exploit as an example).  In
the case of traffic floods, it is fine, since a successful attack has to be
on-going.  Network systems functioning as a bridge can prevent the traffic
from reaching the target system, but that means they introduce latency.  So
for some attacks, the amount of data that would have to be queued to
recognize the attack and prevent it from affecting the target is absurd (a
back-of-the-envelope sketch of both timing problems follows this list).
The engineering constraints on such a system preclude a comprehensive,
useful intrusion detection/prevention product.  That is without taking
encryption and other hindrances into account.  There are exceptions; when
the goal of a network-based system is to analyze and respond to certain
*classes* of traffic, these systems can be very effective.  I am referring
to products that look for DoS attacks, network recon, and the like.
Deployed at the proper place in the network, at the ISP in most cases,
these systems provide necessary functions.  However, these systems are not
*intrusion* detection/prevention systems; they are DDoS
detection/prevention systems and global warning systems.
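
To make the timing argument concrete, here is a back-of-the-envelope
sketch in Python.  Every number in it is invented for illustration, not
measured from any product:

# Sketch of the two timing problems above: a passive sensor racing a
# single-packet exploit, and the queuing delay an inline bridge pays.

def rst_wins(detect_us: float, inject_us: float, consume_us: float) -> bool:
    """A passive sensor sees a copy of a packet at about the same moment
    the target does; its injected RST helps only if detection plus
    injection beats the target consuming the payload."""
    return detect_us + inject_us < consume_us

# A jill.c-style single-packet overflow is consumed almost immediately,
# so the reset loses the race.
print(rst_wins(detect_us=500, inject_us=300, consume_us=50))         # False

# An on-going flood gives the responder effectively unlimited time.
print(rst_wins(detect_us=500, inject_us=300, consume_us=5_000_000))  # True

def bridge_queue_latency_ms(buffered_bytes: int, link_bps: float) -> float:
    """An inline bridge that must hold N bytes before deciding adds at
    least their serialization delay to every flow it forwards."""
    return buffered_bytes * 8 / link_bps * 1000.0

# Buffering 64 KB per decision on a 100 Mb/s link costs roughly 5 ms.
print(bridge_queue_latency_ms(64 * 1024, 100e6))

Reaction is a race the defender loses for one-packet exploits and wins for
sustained floods; buffering enough context to decide inline is paid for in
added latency on every flow.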

There are algorithmic approaches that show great promise in the
identification of new/novel attacks, some even being implemented by a small
number of startup companies.  These approaches are adaptive, and many
include some form of behavioral analysis, data mining, or other statistical
modeling.  However, one can argue that in the search for application-level
vulnerabilities, the data present on the network is a less-than-stellar data
set.  A wise friend pointed out that maybe, just maybe, our problems in
attack recognition are more about the base data set we work with than the
way in which we analyze it.  His point is that there are many techniques
that can be applied to good data to find interesting conclusions, but few
techniques will reliably find valuable results from bad (or inappropriate)
data.  Quite honestly, I have seen some very cool techniques applied to
network header analysis that identified interesting security-relevant things
happening on the network, but often the header that ends up triggering
suspicion is only one of several associated with the attack, which makes
identifying the right traffic to drop a needlessly complex problem.
However, these same techniques applied on the host, where the possible
responses can be quite granular, can be very effective at preventing the
progression of an attack without rampantly stepping all over legitimate
interactions.
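
As a toy illustration of that last point, here is a frequency-based
anomaly score over header tuples.  Field names and traffic are invented
for illustration; real systems model far richer features:

# Flag header combinations that were rare during the training period.
from collections import Counter

def train(headers):
    """Count how often each (proto, dst_port, tcp_flags) tuple appears."""
    return Counter(headers)

def score(model, header, total):
    """Anomaly score: one minus the tuple's observed frequency."""
    return 1.0 - model[header] / total

baseline = [("tcp", 80, "S"), ("tcp", 80, "S"), ("tcp", 443, "S"),
            ("tcp", 80, "S"), ("udp", 53, "-")] * 200
model = train(baseline)

suspect = ("tcp", 80, "FPU")  # an odd flag combination never seen before
print(score(model, suspect, len(baseline)))   # 1.0: maximally suspicious

# The catch: this header is only one of several packets making up the
# attack; the rest of the exchange may look perfectly normal, so "what
# do we drop?" remains unanswered.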

A philosophical example is that an application tends to act in a statistically
predictable way under normal operation (subject of several doctorates).
There are data sets (e.g., system and library calls) that are very closely
bound to the application's behavior that, when analyzed correctly, are
reliable indicators of the application's action.  The data set is tightly
bound to the resultant action of the application.  Given an interposition
between the application and the resources of interest on the system,
response to an abnormal behavior by an application can be quite
granular--say blocking access to a certain file or socket or preventing the
network transmission of certain data.  It has been keenly demonstrated that
an application's behavior is only loosely bound to network data as viewed
outside the host system (Ptacek and Newsham's paper for one), with many
variables affecting the application's receipt of and response to the data.
Interpreting the network data in a way that reliably correlates with the
end behavior of the application is complex and error-prone.
Why use such data?
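
For the curious, the flavor of model I am describing can be sketched in a
few lines: a sliding-window profile of system call sequences, in the
spirit of the published research in this area.  The call names and traces
below are invented for illustration:

# Profile an application by the short system-call sequences it emits
# during normal operation; flag sequences never seen in the profile.

def windows(trace, n=3):
    """All length-n sliding windows over a call trace."""
    return {tuple(trace[i:i + n]) for i in range(len(trace) - n + 1)}

normal_trace = ["open", "read", "mmap", "read", "close",
                "open", "read", "close"]
profile = windows(normal_trace)

def anomalous(trace, profile, n=3):
    """Return the call windows never observed during normal operation."""
    return windows(trace, n) - profile

# An exploited process suddenly issues calls outside its profile.
attack_trace = ["open", "read", "fork", "execve", "socket"]
print(anomalous(attack_trace, profile))

Because the data is the process's own actions, a hit here identifies
exactly which process misbehaved and at which call, which is what makes a
granular response possible.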

It is about time that we recognize what should be done on the network
versus on the host.  Application and OS service attack recognition and
response are best left to host-based systems: the indicating data available
is tightly bound to the actual actions of the process, and the available
response mechanisms span a broad range of invasiveness.  Network traffic
event monitoring
is inherently network-based, and the response to traffic floods requires the
action of network devices in the traffic path.  Capture of network traffic
to a compromised or suspicious host can be of great benefit to an incident
responder and a good source of forensic evidence.  Also, data mining applied
to captured network traffic can be a good way to discover new recon and
attack techniques, and in the event a host-based system catches a new
attack, the captured data can provide good insight into the attack vector
(assuming the attack didn't occur over HTTPS or similar).
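
To show how granular that host-side response can be, here is a toy
interposition sketch using Python's audit hooks.  The guarded path is
hypothetical, and a production system would interpose at the system-call
layer rather than inside the interpreter:

import sys

GUARDED = "/etc/shadow"  # hypothetical resource of interest

def deny_guarded(event, args):
    # The "open" audit event fires before the file is actually opened,
    # so raising here blocks just this one access.
    if event == "open" and args and str(args[0]) == GUARDED:
        raise PermissionError("policy: access to %s denied" % GUARDED)

sys.addaudithook(deny_guarded)

open("ok.txt", "w").close()  # unrelated accesses proceed normally
try:
    open(GUARDED)            # only this interaction is blocked
except PermissionError as e:
    print(e)

Note that the response stops one interaction without killing the process
or severing the connection, a precision no device in the traffic path can
match.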

Of course this was just a cursory overview of this ideal.  And it is just
that, a philosophy for philosophers to live by (a direction for research).
In the trenches, if a given application presents a cost-effective solution,
then that is what you go with.

Dominique Brezinski

