IDS mailing list archives

Re: SourceFire RNA


From: Jason <security () brvenik com>
Date: Tue, 02 Dec 2003 17:27:57 -0500

This is a long post; I hope it illustrates why the initial question of

"It is difficult for me to understand how can passive traffic analysis detect inactive devices and services which do not transmit any network traffic?"

is not really a concern when you consider the pros and cons of active probes and passive analysis as they relate to threat management and change management.

The concern is that an inactive host is a greater threat to your network, and the implication is that an active probe will flush these hosts out. This is simply not true. For a host to be truly inactive it would have to never ARP, never broadcast, and never respond to a probe... That just does not happen in the real world unless the host is designed to be stealthy. The reality is that the only inactive host on a network is in fact not on the network: it is off, disconnected, or a system that cannot be seen by design. In these cases only passive technologies will help when the help is needed.
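To make that concrete, here is a rough sketch in Python using scapy ( my choice of tooling for illustration, not anything a vendor ships ) of how a purely passive listener learns about hosts from nothing more than their ARP traffic. The interface name is a placeholder.

# Passively learn hosts from ARP alone; even a host that never initiates
# application traffic still has to ARP to use the network at all.
from scapy.all import sniff, ARP

seen_hosts = {}  # MAC address -> IP address it last claimed

def note_host(pkt):
    if pkt.haslayer(ARP):
        mac, ip = pkt[ARP].hwsrc, pkt[ARP].psrc
        if mac not in seen_hosts:
            print("new host observed: %s at %s" % (mac, ip))
        seen_hosts[mac] = ip

# Needs privileges to sniff; "eth0" is a made-up interface name.
sniff(iface="eth0", filter="arp", prn=note_host, store=0)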

Consider the following questions relative to hosts.

* If an entity is mute is that by design or default?

  - by design implies a specific purpose, that purpose could be for good
    or bad.

  - by default implies that it is either off or not connected.

* If it is by design is that a threat?

  - Outside of a stealth monitoring technology in support of some
    business function it is a serious threat.

* Can active probes identify this entity?

  - Possibly, if a scan happens when it is on.
    ( unlikely in most cases )
  - No, if it is muted by design.
  - No, if it is off.
  - No, if it is stealthed.
     ( likely by design and in support of a business function )

* Can passive observation identify the entity?

  - Yes, when it does anything that could be a threat
  - Yes, when it gets turned on
  - No, as long as it remains stealthed.
     ( likely by design and in support of a business function )

Now for service discovery things are a little different. Active probes _may_ be able to identify a service, and a specific vulnerability, faster if that service is normally inactive on the network but still active on the host. This only applies to entities that can be discovered through an active probe. The unfortunate thing is that because these are "known" entities, discoverable through an "active probe", we gain no better visibility into the threat than simply knowing that the system is there and configured according to the norm. If it is not configured according to the norm then it is potentially a greater threat, and an active probe will not help in that case.

On the other hand, a passive discovery technology provides the same information and better, through inference, by identifying both the known and unknown entities and correlating them with the known norm. This can happen without introducing the threat or burden of an active probe.
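As a sketch of what that inference can look like ( again Python/scapy purely for illustration, and the "approved" baseline is invented ), a passive listener can learn which host/port pairs are actually serving by watching SYN/ACK replies and compare them against the known norm, without ever sending a packet:

# Infer offered services from SYN/ACK replies and compare to a baseline.
from scapy.all import sniff, IP, TCP

approved = {("10.0.0.5", 80), ("10.0.0.5", 443)}  # the "known norm", invented
observed = set()

def note_service(pkt):
    if pkt.haslayer(IP) and pkt.haslayer(TCP):
        if int(pkt[TCP].flags) & 0x12 == 0x12:    # SYN+ACK: something answered
            svc = (pkt[IP].src, pkt[TCP].sport)
            if svc not in observed:
                observed.add(svc)
                if svc not in approved:
                    print("service outside the norm: %s:%d" % svc)

sniff(iface="eth0", filter="tcp", prn=note_service, store=0)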

Then we get into the dangers of false positives with active probes. The only way to know for sure that a system is vulnerable is to successfully exploit it, which is unrealistic in any environment with the possible exception of a research lab. Take the most recent example, the MS DCOM vulnerabilities. Active probes could ( if allowed ) identify an unpatched service either by checking for the existence of a registry key ( prone to false positives ) or by eliciting a response with a specific request: a system that had not been patched would not reply with the expected response, so you could infer whether the service had been patched. There are a few problems here as they relate to the stated goal of vulnerability management.

- Checking the registry requires administrative privilege, which in essence advertises the administrative credentials to every recipient of the probe.

- Even with administrative privilege, false positives were possible, because under certain circumstances the patch failed to apply correctly.

- Attempting to elicit a specific response only identified that a patch had been installed; if alternative methods of resolution were taken, such as disabling DCOM, the check was ineffective and inaccurate.

- It resulted in a false sense of security for many, because the patch was ineffective, and it produced a 100% false negative for any integrated system that relied solely on this information for vulnerability management.

The passive approach was able to identify, with a high degree of certainty, the likely vulnerable systems before patching even began, and it was able to identify the change in behavior even when the host was supposed to have been patched... In this way you can foresee possible and actual vulnerabilities without ever touching the host directly. With this information you can target your response to the high risk systems and handle the situation more effectively.
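The targeting itself is nothing more than a sort over what has been passively observed. A toy example ( the inventory, fingerprints, and risk numbers are all made up; in practice they would come from observed traffic ):

# Chase the hosts that still behave like unpatched systems, highest risk first.
inventory = {
    "10.0.0.7":  {"fingerprint": "pre-patch",  "risk": 9},
    "10.0.0.12": {"fingerprint": "post-patch", "risk": 2},
    "10.0.0.31": {"fingerprint": "pre-patch",  "risk": 5},
}

def response_order(hosts):
    suspect = [ip for ip, h in hosts.items() if h["fingerprint"] == "pre-patch"]
    return sorted(suspect, key=lambda ip: hosts[ip]["risk"], reverse=True)

print(response_order(inventory))   # ['10.0.0.7', '10.0.0.31']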

Next we have evasion: it is trivial to evade any active probe, especially routine ones. When we start thinking about threat management this scenario is an even greater concern. An attacker can easily evade an active probe coming from known scanning machines and continue to provide services.

I hope I have illustrated why passive is the best way to go when considering the true threats and the alternatives.


Renaud Deraison wrote:

On Tue, Dec 02, 2003 at 10:46:48AM -0500, Rob Shein wrote:

The answer to this is simple.  All machines make some kind of noise on the
network, from an IDS-centric view.  If the machine doesn't have any
interaction, ever, with anything, then it's not really important from the
IDS point of view, because it can't be breached WITHOUT interaction.  Even
if the first traffic involving that machine is an attack or scan, at that
point the machine becomes at least as visible to the IDS as it is to the
attacker.


Waiting for an attack is not necessarily a good strategy either - just think
about all the worms that have been plaguing our last summer vacations
these last few years.

Reactive security practices simply don't work. If the host does not
interact with the rest of the network, that does not make it more benign
than any other one on the network - quite the contrary actually, as it
suggests that it never downloaded any patch.


Vulnerability management/discovery/patching is but a piece of the puzzle and is by nature a reactive response. It does not matter whether it is an active or passive discovery model; it is still reactive. The attack may be a blind one against a service that a passive technology had not yet seen used, or against a service that was known some time ago through the use of active probes. Either way, sufficient information was available before the attack to determine that a threat likely existed. With passive technology it is assured that the most up to date information is readily available, and if a change occurs it is flagged as soon as it happens, not on the next round of scans.
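The difference is simply when the flag gets raised. A small sketch ( the baseline and addresses are invented ):

# A passive monitor flags the change on the traffic that reveals it; a weekly
# scan leaves the same change invisible until its next run.
import time

baseline = {("10.0.0.5", 80)}   # services we expect this segment to offer

def on_passive_observation(host, port):
    # called for every service observation as traffic is seen
    if (host, port) not in baseline:
        print("flagged immediately at %s: unexpected %s:%d"
              % (time.ctime(), host, port))

on_passive_observation("10.0.0.5", 8080)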



                                -- Renaud
