WebApp Sec mailing list archives

Re: Of the three expensive vulnerability scanners


From: Jeremiah Grossman <jeremiah () whitehatsec com>
Date: Mon, 15 Nov 2004 12:56:46 -0800


Jim,

You mention an interesting and overlooked challenge with regard to web application security scanning. Whenever you scan custom web code, you are naturally going to see an annoying false-positive rate. Most of us familiar with these tools already understand this problem and why it happens.

But there is also the OTHER annoying problem of "duplicate" vulnerabilities, where the same issue is reported several times. This problem is difficult to resolve in scanning products, and it is hard to articulate why it occurs in the first place. It is also not something we're used to seeing from other kinds of scanners. I'll do my best at describing some reasons why...

The same vulnerable form shows up on multiple pages and is not treated uniquely. The problematic parameter name may show up across several CGIs (like a template name field). Perhaps the URL always looks slightly different to the scanner because it carries some dynamically changing session values. The URL structure could also be non-standard, making the input points hard to isolate. The scanner product may not group vulnerabilities by application, parameter, and class. There are several other reasons as well.
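One way a scanner (or a post-processing script on its output) could collapse such duplicates is to key each finding on a normalized URL, parameter name, and vulnerability class. A minimal sketch follows; the finding fields and the list of session-parameter names are assumptions for illustration, not any product's actual schema:

```python
# Sketch: deduplicating scanner findings by (normalized URL, parameter,
# vulnerability class). Field names and SESSION_PARAMS are illustrative.
from urllib.parse import urlsplit, parse_qsl, urlencode, urlunsplit

SESSION_PARAMS = {"jsessionid", "phpsessid", "sid", "sessionid"}

def normalize_url(url: str) -> str:
    """Strip session-tracking query values so the same page keys identically."""
    parts = urlsplit(url)
    query = [(k, v) for k, v in parse_qsl(parts.query)
             if k.lower() not in SESSION_PARAMS]
    return urlunsplit((parts.scheme, parts.netloc, parts.path,
                       urlencode(query), ""))

def dedupe(findings):
    """Keep only the first finding for each (URL, parameter, class) key."""
    seen, unique = set(), []
    for f in findings:
        key = (normalize_url(f["url"]), f["parameter"], f["vuln_class"])
        if key not in seen:
            seen.add(key)
            unique.append(f)
    return unique

raw = [
    {"url": "http://shop/cart.asp?item=1&sessionid=abc",
     "parameter": "item", "vuln_class": "XSS"},
    {"url": "http://shop/cart.asp?item=1&sessionid=xyz",
     "parameter": "item", "vuln_class": "XSS"},
]
print(len(dedupe(raw)))  # → 1: the two session-variant URLs collapse
```

The same idea extends to the other causes above, e.g. keying on the form's action target rather than the page it appears on.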

As we know, web applications are a quite diverse type of software, so effective generic scanning techniques are difficult to create and apply. The good news is that all the scanners do seem to be getting significantly better over time.


Regards,

Jeremiah-




On Saturday, November 13, 2004, at 01:22 PM, Jim+Lisa Weiler wrote:

I've run WebInspect (WI, SPI Dynamics) and AppScan (AS, Sanctum, now Watchfire) against 2 large ecommerce site code bases, one written in IIS/ASP/VB and the other in Apache/IBM WebSphere (WebSphere Commerce Server/JSP). Both products seem equally comprehensive. AS is faster, and Watchfire says they do support better because they are bigger and have a support organization. I found both organizations to be responsive, though the Watchfire folks didn't seem to know as much about the world outside their product as the WI folks did.

Another dimension that I found very important to evaluate is how useful the reporting is for planning remediation projects and actually fixing the problems. Both products might show 100 vulnerabilities, yet this might actually involve only 5 web pages, because they count a vulnerability as a single failed test on a single field on 1 page. 5 failed XSS tests on the same field on the same page will count as 5 vulnerabilities, and if the XSS tests are in different categories (HTML injection vs. unchecked parameter) they will land in different parts of the reports. AS is very poor at telling you:
1. what the different unique web pages with a vulnerability are (no repeated web pages)
2. for each vulnerability type, what pages it occurs on

Just finding out exactly which different pages were involved took 2 hours of looking at the AS report screen; they report the same page over and over again in different places. WI is somewhat better, though I haven't seen a version newer than 5 months old.

The pie charts, executive reports, and spreadsheet exports don't help in planning the work to fix the problems.
----- Original Message ----- From: "Tom Stracener" <strace () gmail com>
To: <webappsec () securityfocus com>
Sent: Sunday, October 10, 2004 2:45 PM
Subject: Re: Of the three expensive vulnerability scanners


In-Reply-To: <20041007153115.28058.qmail () www securityfocus com>

Hi! I sought to answer this question for myself a while back, so hopefully you'll find my own experiences here useful. First, consider the types of applications and the application environment you will be securing. Depending upon the complexity of the web application you're dealing with, you're likely to get quickly diminishing returns from the tools you have mentioned. Strong manual testing capabilities are a must, in my opinion, and sadly a lot of commercial apps fall short there.

When possible, you should contact the vendors and acquire a demo license in order to get a feel for how a tool actually performs. If that's not available, then you should sit down with the vendors for a hands-on session.

SPI Dynamics is very demo-friendly. You'll find their people polite, professional, and quick to respond once you download the product. If you want to take a look at it, just contact Natalie Hinkle <nhinkle () spidynamics com> with any questions or problems downloading it. Also, if you go this route, be sure to download the SPI Toolkit, which includes some manual pen-testing utilities.

With Sanctum, acquiring a demo was more difficult: I had to speak with the salesperson's manager and then wait a few days, only to be declined. Only after sending an email to their VP of Internal Sales, together with my resume, did I manage to get a demo. You may have better results. Jane Foulkes <jfoulkes () sanctuminc com> is a salesperson you can contact over there.

Last I checked, Scando did not have a demo available at all.

I would also strongly encourage you to contact Cenzic and discuss having a look at their upcoming version of Hailstorm 2.0. It's by far the most extensible of the available commercial offerings. The tool provides a nice balance of automated versus manual app spidering, lets you record and replay complicated HTTP sessions (which they call traversals), and then lets you apply different types of security policies as Hailstorm iteratively steps through the web application. You can also create your own policies and take full control over the fault injectors that interrogate the app, as well as the types of response conditions you're interested in detecting. This tool shows an incredible amount of promise, so it would probably be in your interest to evaluate it. You can contact Mandeep Khera over there <mandeep () cenzic com> if you're interested in finding out more about it.
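The record-and-replay-with-policies workflow described above can be pictured as a simple loop: step through the recorded requests, inject each payload from the policy into each parameter, and flag responses matching the policy's detection condition. The sketch below is purely illustrative of that idea; every name in it is invented, and none of it is Cenzic's actual API (the `send` function is a toy stand-in that echoes parameters back, like a naively reflecting app):

```python
# Illustrative sketch of record/replay fault injection. All names are
# invented for illustration; this is not any vendor's real API.

# A recorded "traversal": an ordered list of (method, url, params) requests.
traversal = [
    ("GET",  "/login",  {"user": "demo", "pass": "demo"}),
    ("POST", "/search", {"q": "widgets"}),
]

# A "policy": payloads to inject, plus a response condition to flag.
xss_policy = {
    "payloads": ["<script>alert(1)</script>", "\"'><img src=x>"],
    "detect": lambda body, payload: payload in body,  # reflected unescaped?
}

def send(method, url, params):
    """Toy stand-in for an HTTP client: echoes parameter values back."""
    return " ".join(params.values())

def run_policy(traversal, policy):
    """Replay the traversal, injecting each payload into each parameter."""
    hits = []
    for method, url, params in traversal:
        for name in params:
            for payload in policy["payloads"]:
                mutated = dict(params, **{name: payload})
                body = send(method, url, mutated)
                if policy["detect"](body, payload):
                    hits.append((url, name, payload))
    return hits

hits = run_policy(traversal, xss_policy)
print(len(hits))  # → 6: 3 parameters x 2 payloads, all echoed by the toy app
```

The separation of traversal (what to replay) from policy (what to inject and detect) is what makes this style of tool extensible: new vulnerability classes are just new payload/condition pairs.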

Also, browse the recent archives of this list because your question
has surfaced in various forms and you'll be able to find a variety of
useful perspectives.

--Tom





