Bugtraq mailing list archives

Comments re: Vulnerability Testing


From: ashland () pobox com (tqbf)
Date: Fri, 12 Feb 1999 03:16:40 -0500


Hi. I'm the development lead on Network Associates' CyberCop Scanner
(formerly Secure Networks Ballista) product. My peers keep asking me why I
haven't gotten involved with the ISS discussion, so I am caving and
offering my take on network scanner implementations.

Those of you who read comp.security.unix probably remember a lengthy
discussion between myself (then of Secure Networks) and Craig Rowland
(then of WheelGroup) regarding banner checking versus "active probing"
versus conclusive testing. That discussion lasted forever and was somewhat
unproductive because Craig and I talked right past each other. Before I
go any further, we should agree on and use consistent terminology. Let me
introduce mine.

There are two basic means by which a scanner can test for the presence of
a vulnerability: "inference" and "verification". An "inference" test
attempts to predict whether a vulnerability is present without actually
confirming that it is; inference checks are fast and easy to write. A
"verification" test conclusively determines whether the vulnerability is
present, typically by actually attempting to exercise it. Verification
tests are slower, but almost always significantly more accurate.

In practice, scanners detect vulnerabilities in three ways:

"Banner checks" are inference tests that are based on software version
information. The term "banner check" is typically associated with
Sendmail security tests, which are easiest to write simply by grabbing the
ubiquitous Sendmail banner from the SMTP port and pulling the version out
of it.
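
To make that concrete, here's a rough sketch of what a banner check boils
down to, in Python; the host name and the list of vulnerable versions are
invented for illustration, not taken from any product:

    import re
    import socket

    # Hypothetical list of affected Sendmail versions.
    VULNERABLE = {"8.6.9", "8.6.10", "8.6.11", "8.6.12"}

    def banner_check(host, port=25, timeout=10):
        """Grab the SMTP greeting and infer vulnerability from it."""
        with socket.create_connection((host, port), timeout=timeout) as s:
            banner = s.recv(1024).decode("ascii", errors="replace")
        # Typical greeting: "220 host ESMTP Sendmail 8.6.12/8.6.12; ..."
        m = re.search(r"Sendmail (\d+\.\d+\.\d+)", banner)
        if m is None:
            return None   # no version advertised; the check learns nothing
        return m.group(1) in VULNERABLE

Note how little work that is, and note that the whole verdict rests on the
server volunteering an honest version string.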

"Active probing checks" are inference tests that are not based on
software version information; instead, an active probe "fingerprints" a
deployed piece of software in some manner, and attempts to match that
fingerprint to a set of security vulnerabilities. An active probe attempting
to find the Sendmail 8.6.12 overflow might, for instance, compare the
difference in "MAIL FROM... MAIL FROM..." errors between 8.6.12 and later
versions of Sendmail to determine what version is running.
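
Sketched in the same spirit; the distinguishing response text below is an
assumption for illustration, where a real probe would be derived from the
observed behavior of each Sendmail release:

    import socket

    def read_reply(s):
        # Naive reply reader: collects data until a CRLF-terminated chunk
        # arrives. Good enough for a sketch; ignores multi-line replies.
        data = b""
        while not data.endswith(b"\r\n"):
            chunk = s.recv(1024)
            if not chunk:
                break
            data += chunk
        return data.decode("ascii", errors="replace")

    def active_probe(host, port=25, timeout=10):
        with socket.create_connection((host, port), timeout=timeout) as s:
            read_reply(s)                              # 220 greeting
            s.sendall(b"HELO probe.example.com\r\n")
            read_reply(s)
            s.sendall(b"MAIL FROM:<probe@example.com>\r\n")
            read_reply(s)
            s.sendall(b"MAIL FROM:<probe@example.com>\r\n")
            fingerprint = read_reply(s)                # reaction to the repeat
            s.sendall(b"QUIT\r\n")
        # Hypothetical mapping from reaction to version family.
        return "pre-8.7" if "Sender already specified" in fingerprint else "8.7+"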

"Exploit checks" are verification tests that are based specifically on the
software flaws that cause a vulnerability. An exploit check typically
involves exploiting the security flaw being tested for, but more finesse
can be applied to exercise the "bug" without exercising the "hole". An
8.6.12 exploit test, as mentioned earlier in the thread, could be based on
the SMTP server's reaction to a large buffer of 'A's provided as command
input.
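
The shape of such a check might look like this, with the command, the
buffer size, and the classification all standing in as assumptions rather
than the actual 8.6.12 test:

    import socket

    def exploit_check(host, port=25, timeout=10):
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.recv(1024)                               # 220 greeting
            s.sendall(b"HELO " + b"A" * 4096 + b"\r\n")
            try:
                reply = s.recv(1024)
            except (socket.timeout, ConnectionResetError):
                return True   # no answer, or reset: the buffer did damage
        if not reply:
            return True       # connection closed: server likely crashed
        return False          # server answered normally: deemed not vulnerable

The point is that the verdict comes from the server's observed reaction to
the triggering input, not from anything the server says about itself.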

Opinion:

Banner checks are fastest. Active probes are fast, but difficult to
construct accurately. Exploit checks are slow, usually easier to write
than active probes, but harder to write than banner checks. Exploit checks
are significantly more reliable than banner checks and usually more
reliable than active probes.

The philosophy behind CyberCop Scanner:

If you claim to test for something, you must do so reliably. If a check
cannot be performed reliably, it shouldn't be performed at all, and, when
the market coerces us into performing bad checks, the unreliability of the
check should be disclaimed prominently.

In any case in which a vulnerability is tested for, unless there is an
extremely good reason not to, the test should be based on an exploit
check. The reason is obvious: speed is less important than accuracy.

People depend on the output of a scanner to secure their network. I'm
surprised that people defending ISS are emphatically denying that a
false negative result from the scanner is a problem. It is awfully hard
to solve the "business problem" of securing a network without adequate
information. If you have to back your scanner results up with a manual
audit, what was the purpose of deploying the scanner in the first place?

The majority of CyberCop Scanner checks are exploit tests. We've
publicly backed this up with numbers and checklists in the past.

There are occasions in which exploit checks are unsuitable. In my
experience, these can be broken into two categories: situations in which
an exploit test will deny service on the network, and situations in which
a vulnerability is not generally exploitable on a network.

As we're all aware, many security problems cannot reliably be exploited
without disabling a service or machine in the process. When this is the
case, it is often impractical to test for them using verified exploit tests
(the scanner might leave a wake of downed mission-critical servers
behind). On these occasions, it is appropriate to utilize active probing
tests.

There are some vulnerabilities that simply cannot be tested for accurately
without utilizing exploit tests, even though the exploit will crash the
service. The Ballista team's response to that was to "red-flag" dangerous
tests and disable them by default. CyberCop Scanner will crash remote
hosts if all tests are enabled, but there are occasions (one-time audits
and post-mortems) in which the requirements for assurance outweigh the
need to avoid disruption.
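
A toy model of that policy, with invented names rather than CyberCop
internals: every check carries a danger flag, and flagged checks run only
when the operator explicitly opts in:

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Check:
        name: str
        dangerous: bool              # red-flagged: may crash the target
        run: Callable[[str], bool]   # takes a host, returns the verdict

    def select_checks(checks, include_dangerous=False):
        # Dangerous checks are excluded unless the operator opts in,
        # e.g. for a one-time audit or a post-mortem.
        return [c for c in checks
                if include_dangerous or not c.dangerous]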

Banner checks are almost always a flawed approach. There are three major
reasons for this. First, banners are configurable. It's awfully silly if
your scanner misses a remote root Sendmail vulnerability simply because an
admin had the foresight to remove the version advertisement from the
banner. Second, version numbers do not always map directly to security
vulnerabilities; particularly in open-source software, people often fix
problems locally with special-purpose patches prior to the "official" next
rev of the software.

The final reason why banner checks are flawed is that problems recur, and
occur across multiple incarnations of the same class of software. As Matt
Bishop points out in his discussion of secure programming, just because a
vendor fixes a problem in version 2.3 does not mean it will remain fixed
in 2.4. More importantly, just because the advisory says "wu-ftpd" doesn't
mean that the stock Berkeley ftp daemon, or Joe Router Vendor's home-brew
system, isn't vulnerable as well. If an exploit check will detect all
possible variants of a hole, and a banner check detects one specific
variant, it seems obvious that the exploit check is the right way to
approach the problem.

This has always been a major philosophical difference between Network
Associates' vulnerability analysis work and ISS's.

If our labelling and numbering systems were coherent, it'd be quite
interesting to compare detection methodologies between our two products.
I'm also curious as to how Netect fares. Axent's scanner seems to take an
interesting approach, which is to accept a large incidence of false
positives in exchange for a broader chance of detection and faster scan
times (for instance, their CGI tests apparently stop after determining
whether a potentially vulnerable program is present, and report positive).
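
As a sketch of that presence-only style of check (the target paths are
illustrative, and anything other than a 404 is simply counted as "present"):

    import http.client

    RISKY_CGIS = ["/cgi-bin/phf", "/cgi-bin/test-cgi"]  # example targets

    def cgi_presence_check(host, port=80, timeout=10):
        hits = []
        for path in RISKY_CGIS:
            conn = http.client.HTTPConnection(host, port, timeout=timeout)
            try:
                conn.request("HEAD", path)
                status = conn.getresponse().status
            finally:
                conn.close()
            if status != 404:         # "not missing" is reported as a hit
                hits.append(path)     # fast and broad, but may false-alarm
        return hits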

In any case, it's refreshing to see people compare scanners on accuracy
and methodology instead of numbers and gloss.

-----------------------------------------------------------------------------
Thomas H. Ptacek                          Network Security Research Team, NAI
-----------------------------------------------------------------------------
                                 "If you're so special, why aren't you dead?"


