Penetration Testing mailing list archives

Re: Rooting out false positives


From: Omar Herrera <oherrera () prodigy net mx>
Date: Tue, 19 Jul 2005 08:50:33 -0500

Hi Javier,

> There's a slight issue with your approach: on paper it seems fine, but
> when you are "out there" it turns out many people are using mixed
> equipment, which can lead to different (TCP/IP) fingerprints, TTLs,
> etc. on the network side. The problem is that an IP address no longer
> identifies a single end host; you can run into all sorts of:
>
> - NAT tricks, including load-balancer tricks (e.g. Alteon and
> similar equipment, which will redirect different ports to different
> servers).
>
> - Reverse proxies, in which the service banner and the OS identified
> (via fingerprinting) do not match for a given IP address. These might
> be as simple as a port redirector or as complex as an inline
> application-level firewall. Nowadays, many firewalls are
> reimplementing these features under new branding (e.g. Check Point's
> Firewall-1 NG "Application Intelligence"). In these cases you might
> even get conflicting identifications: sometimes the answer is
> returned by the end server, and in others it is the firewall answering
> that it blocked an attack attempt. An inline IPS, or an IDS with
> blocking, will have a similar impact on the identification phase.

I concur; it has become increasingly difficult to fingerprint the OS of a server over time.
Often, however, you can at least identify the brand of the filtering device (firewall, proxy, etc.).
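
As a rough illustration, something along the following lines can make that kind of mixed setup visible by comparing the
SYN/ACK TTLs and window sizes seen on different ports of the same IP; clearly different values suggest a NAT device or
load balancer spreading ports across back-end hosts. This is only a sketch: it assumes Scapy and raw-socket privileges,
and the target address and port list are placeholders.

#!/usr/bin/env python
# Sketch: compare SYN/ACK TTLs and window sizes across ports of one IP.
# Noticeably different values on the same address hint at NAT or a load
# balancer redirecting ports to different back-end hosts.
# Assumes Scapy and raw-socket privileges; target and ports are placeholders.
from scapy.all import IP, TCP, sr1, conf

conf.verb = 0
target = "192.0.2.10"            # placeholder address
ports = [22, 25, 80, 443, 8080]  # placeholder port list

for port in ports:
    reply = sr1(IP(dst=target)/TCP(dport=port, flags="S"), timeout=2)
    if reply is None:
        print("%5d  no reply (filtered?)" % port)
    elif reply.haslayer(TCP) and reply[TCP].flags == 0x12:  # SYN/ACK
        print("%5d  open    ttl=%-3d win=%d" % (port, reply.ttl, reply[TCP].window))
    else:
        print("%5d  closed/other  ttl=%d" % (port, reply.ttl))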

To get around this problem, I have found it much easier to target the application level and work downwards through the
OSI model. Even proxies with the tightest normalization rules will leak behavior that is distinctive of a specific
application or service (and many times you can elicit these responses through valid requests that won't be
filtered).
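
For example (just a sketch; the hostname and probe paths below are placeholders), two perfectly valid HTTP requests are
often enough: the exact status-line wording, header names and order, and the error-page style of the real backend tend
to leak through even a normalizing proxy:

#!/usr/bin/env python
# Sketch: application-level fingerprinting with valid requests only.
# The raw response head (status line wording, header names and order) is
# printed so it can be compared against known server/proxy behavior.
# Hostname and probe paths are placeholders.
import socket

target = "www.example.com"                    # placeholder host
probes = ["/", "/nonexistent-page-check"]     # valid, unfiltered requests

for path in probes:
    s = socket.create_connection((target, 80), timeout=5)
    s.sendall(("GET %s HTTP/1.0\r\nHost: %s\r\n\r\n" % (path, target)).encode())
    data = b""
    while True:
        chunk = s.recv(4096)
        if not chunk:
            break
        data += chunk
    s.close()
    head = data.split(b"\r\n\r\n", 1)[0]
    print("--- %s ---" % path)
    print(head.decode("latin-1"))             # headers, order preserved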

With the application brand and version identified, you can sometimes make a good guess at the underlying OS, but with
tight filters in place we would be more concerned with filter evasion than with identifying that underlying OS.

One of the worst-case scenarios here, that I can think of, would be a server with a single, home-developed, proxied
service. With such a target, however, a pentester might just choose to start manual pentest procedures and spend more
time there, instead of shooting everything the vulnerability scanner has to offer at it; or at least make educated
guesses and activate only the relevant signatures in the scanner, hoping that some clues will show up.

> As to how to flag false positives, here's my 2c (which is quite
> similar to what you already said):
>
> - If the scanner provides information that couldn't have been obtained
> without exploiting the vulnerability (like an internal IP address, the
> contents of a file, etc.), that would prove it's not a false positive.

Definitely, that's a good point. Many scanning scripts provide results that clearly rule out the possibility of false
positives.
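
For instance, a quick check along these lines (just a sketch; the capture file name and patterns are only examples) can
confirm that a response contains evidence that could only have come from the target, such as an internal RFC 1918
address or an /etc/passwd fragment:

#!/usr/bin/env python
# Sketch: look through a saved scanner/exploit response for evidence that
# could only come from the target itself (internal RFC 1918 addresses, the
# classic /etc/passwd root entry, etc.). The file name is a placeholder.
import re

evidence_patterns = [
    re.compile(r"\b10\.\d{1,3}\.\d{1,3}\.\d{1,3}\b"),
    re.compile(r"\b192\.168\.\d{1,3}\.\d{1,3}\b"),
    re.compile(r"\b172\.(?:1[6-9]|2\d|3[01])\.\d{1,3}\.\d{1,3}\b"),
    re.compile(r"root:[^\n]*:0:0:"),           # /etc/passwd marker
]

with open("scanner_response.txt") as f:        # placeholder capture file
    body = f.read()

for pattern in evidence_patterns:
    for match in pattern.finditer(body):
        print("possible hard evidence: %s" % match.group(0))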

Others, on the other hand, are not so easy to validate. For example, some scripts report something like: "I did not get
a response from service X; therefore, this might be an indication that it is vulnerable to ...". Even if you
confirmed that the port was open and responsive before, it might just be a filtering device that blocked your scan.
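
One cheap sanity check in that situation (sketched below with a placeholder target and port) is to re-probe the service
right after the inconclusive result: if a plain connect still succeeds from the same source, the silence was more
likely a filtering device dropping that particular probe than the service actually being affected:

#!/usr/bin/env python
# Sketch: re-check a service right after a "no response, might be
# vulnerable" result. A successful plain connect suggests the earlier
# silence was filtering, not the service failing.
# Target and port are placeholders.
import socket

def port_answers(host, port, timeout=5):
    try:
        socket.create_connection((host, port), timeout=timeout).close()
        return True
    except (socket.timeout, OSError):
        return False

host, port = "192.0.2.10", 80                  # placeholders
if port_answers(host, port):
    print("service still answers: the silent probe was probably filtered")
else:
    print("service no longer answers: investigate further (or you are blocked)")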

Scanners getting automatically blocked for being too noisy are also a common cause of false positives. This situation
causes big headaches for any pentester, because it is not always easy to tell at which point in time your scanner's IP
was blocked. Also, introducing filter evasion techniques may itself introduce more false positives, and increasing the
time between probes might not be an option due to time and scope restrictions.
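
A simple "canary" probe running alongside the scanner can at least timestamp the moment the scanning IP gets blocked,
so that findings reported after that point can be re-tested or discounted. Something along these lines (just a sketch;
target, port and interval are placeholders):

#!/usr/bin/env python
# Sketch: periodically re-check a known-good port from the scanning IP and
# log the result with a timestamp. The first failure after a streak of
# successes marks the likely moment of blocking.
# Target, port and interval are placeholders.
import socket
import time

target, port, interval = "192.0.2.10", 80, 30  # placeholders

while True:
    try:
        socket.create_connection((target, port), timeout=5).close()
        status = "reachable"
    except (socket.timeout, OSError):
        status = "NO RESPONSE - scanner IP possibly blocked?"
    print("%s  %s:%d  %s" % (time.strftime("%H:%M:%S"), target, port, status))
    time.sleep(interval)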

The best I can think of to counter this is to manually test some important vulnerabilities (including some probable
false negatives) by directly hitting previously identified, Internet-facing services that are known to have several
vulnerabilities (or at least those with a high impact on the business). In other words, emulate the behavior of an
experienced hacker: avoid noise and shoot to kill. Most pentesters will include these tests in their projects
anyway :-)

Kind regards,

Omar Herrera

