WebApp Sec mailing list archives

RE: Of the three expensive vulnerability scanners


From: "Mark Curphey" <mark () curphey com>
Date: Tue, 23 Nov 2004 07:33:04 -0500

Hacme Bank - 100% of the issues in the bank are taken from real-world
assessments done at Foundstone. Why do you feel that is not a good "test"
application? I am not debating one way or the other, but I am interested in
why you don't think it's a good test site, and more interested in what you
think would be... we are happy to build it ;-)

Your comment below also caught my eye.

".....then the only way really is to code review a production site and work
out the vulns manually first then see how many the scanner detects. "

By this I think you are implying that the only effective way to test the
application for vulns is to perform a manual code review and then compare the
app scanner (black box HTTP scanner) against those results? If that's the
case, aren't you implying that the most efficient way to test is code review
with full knowledge of and full access to the system (my very firm belief,
btw), and then seeing how well another technique compares against those
results? If this is what you were implying, why not just use the more
efficient method anyway?

-----Original Message-----
From: King, Stuart (REHQ-LON) [mailto:Stuart.King () reedelsevier com] 
Sent: Monday, November 22, 2004 11:32 AM
To: Adam Shostack; webappsec () securityfocus com
Subject: RE: Of the three expensive vulnerability scanners

There's a lot of criticism in this group of the vulnerability scanners - and
no, they are by no means a panacea for application security - but they ARE
useful. I would also argue that they are getting better (I regularly make use
of one of the "big 3").

What I will state is that, used by themselves by a person with no other
skills in pen testing web applications, they are next to useless - but then I
would argue the same for tools such as Nessus et al. What matters is how you
interpret and report the results, and how you focus attention on the parts of
the application which the scanners are not yet mature enough to reach.

I would also not recommend using (excellent) training tools such as Hacme for
testing the effectiveness of the scanners - they are not coded as any live
application would be. If you want to baseline, then the only way really is
to code review a production site and work out the vulns manually first then
see how many the scanner detects.
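
For example (a rough sketch, assuming Python and made-up finding IDs), the
comparison boils down to measuring the scanner's detection rate against the
manually established baseline:

    # Sketch: compare scanner output against a manual code-review baseline.
    # The finding identifiers below are invented for illustration.
    manual_findings  = {"xss-login-msg", "sqli-search-q", "authz-admin-export", "csrf-transfer"}
    scanner_findings = {"xss-login-msg", "sqli-search-q", "xss-help-page"}

    detected = manual_findings & scanner_findings   # true positives
    missed   = manual_findings - scanner_findings   # false negatives
    extra    = scanner_findings - manual_findings   # needs triage (often noise)

    print("detection rate: %d/%d" % (len(detected), len(manual_findings)))
    print("missed:", sorted(missed))
    print("to triage:", sorted(extra))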

Vulnerability scanners should be just one of an arsenal of tools for the good
security tester. The best tool of all is the Mark One eyeball.

-----Original Message-----
From: Adam Shostack [mailto:adam () homeport org]
Sent: 21 November 2004 19:47
To: ban.marketing.bs () hushmail com
Cc: webappsec () securityfocus com
Subject: Re: Of the three expensive vulnerability scanners

I know of companies that deploy millions of lines of new code annually (both
in-house and outsourced code). Deciding what to have an expert look at is hard
and slow.  Adding any automation makes their experts more effective.

So you need to decide between static testing, dynamic testing, or some mix.
Static testing is very good at finding some things, but not others.  It
finds strcats, but doesn't find a lack of authentication.
(I like to think of these as sins of commission vs. sins of
omission.)
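
To make that split concrete, here is a rough Python analogue (hypothetical
function names; strcat is the classic C case, but the same distinction shows
up in any language):

    import os

    # Commission: a dangerous call fed with tainted data. Static analysis can
    # pattern-match the sink (os.system here, strcat in C, string-built SQL...).
    def ping_host(user_supplied_host):
        os.system("ping -c 1 " + user_supplied_host)   # command injection

    ACCOUNTS = {"42": "alice", "43": "bob"}   # stand-in data store for the sketch

    # Omission: nothing dangerous is present; the authorisation check is simply
    # absent. There is no sink to pattern-match, so static tools stay quiet.
    def delete_account(requesting_user, account_id):
        # ...no check that requesting_user owns account_id or is an admin...
        ACCOUNTS.pop(account_id, None)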

I'm not going to argue for or against the commercial dynamic test tools... I
just don't know enough about them.  But dynamic testing is not fundamentally
flawed; it's a potentially useful part of a toolset.
Would you not nmap and nikto boxes before they go out, just as a sanity
check?

Adam


On Tue, Nov 16, 2004 at 06:14:28PM -0800, ban.marketing.bs () hushmail com
wrote:
| OK, what am I missing here? Why use a fundamentally flawed technique
| for finding the issue? Why not look at the source? It's pretty damn
| obvious where you are reading or writing unvalidated data... please,
| please, no "source is not always available"
| junk... this is the web and 99% of the time you're looking at bespoke
| apps. At worst, you have to ask for it or educate the client.
| 
| It's about time the industry started taking software security seriously,
| and continuing down this futile route of refining pen testing
| techniques to make up for the obvious limitations of that technique is
| not it, IMHO.
| 
| Newsflash - most serious XSS issues in the real world are stored, not
| reflected, and unless you can trace data to the reflection point this
| technique will NEVER find them!
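| 
| (To make the stored case concrete - a rough sketch, Python standard library
| only, hypothetical URLs: the response to the injection request looks clean,
| because the payload only comes back later, on a different page.)
| 
|     import urllib.request, urllib.parse
| 
|     payload = '<script>alert(1)</script>'
|     post = urllib.parse.urlencode({"comment": payload}).encode()
| 
|     # 1) Inject. The immediate response is just a "thanks" page, so a scanner
|     #    that only inspects *this* response sees nothing to report.
|     resp = urllib.request.urlopen("http://example.test/comment", data=post)
|     print(payload in resp.read().decode())        # typically False
| 
|     # 2) The payload actually fires here, on a different page, for other users.
|     view = urllib.request.urlopen("http://example.test/guestbook")
|     print(payload in view.read().decode())        # True if the stored XSS exists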
| 
| 
| 
| In-Reply-To: <003801c4c9c6$e5f39530$8d8606d1@rockstar>
| 
| Jim,
| 
| The problems you've mentioned with regard to the Cross Site Scripting 
| tests point to a functionality area where the major players in the App 
| security market need major improvement. As Jeremiah pointed out, the 
| problem is broader than XSS policies alone, but it certainly affects 
| them.
| 
| One reason the XSS policies yield diminishing returns and are poorly
| organized in reports is, I believe, due in part to a lack of proper
| detection mechanisms. Both products use a plethora of fault injection
| techniques, yet neither seems sensitive to whether or not the injected
| script is returned within the context of the app's response in a form
| that is executable by a browser. As a result, when one form field is
| vulnerable to XSS, you can get into situations where virtually every
| XSS test returns a positive detection.
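| 
| (A minimal sketch of the missing check, assuming Python and a hypothetical
| target URL: a unique marker per test separates "echoed at all" from "echoed
| in a form a browser could execute".)
| 
|     import html, urllib.request, urllib.parse
| 
|     marker = "xss8271"                              # unique per test case
|     probe = '"><script>alert("%s")</script>' % marker
| 
|     url = "http://example.test/search?q=" + urllib.parse.quote(probe)
|     body = urllib.request.urlopen(url).read().decode()
| 
|     naive_hit    = marker in body                   # fires even if output is HTML-encoded
|     encoded_only = html.escape(probe) in body and probe not in body
|     executable   = probe in body                    # reflected un-encoded: report it
| 
|     print("naive:", naive_hit, "encoded only (warn):", encoded_only,
|           "executable:", executable)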
| 
| As you've no doubt noticed, each product checks for various kinds of
| XSS; some of these kinds are distinguished on the basis of the
| delimiter that is used. Despite the technical differences, each
| delimiter type has a sophisticated name (e.g. Double Quote Single Quote
| Bracket kung fu, etc.):
| 
| ">&lt;script ....
| '>&lt;script ....
| ">">&lt;script ...
| <--&lt;script ...
| <textarea>&lt;script ...
| etc.
| 
| While the main vulnerability condition is whether or not an
| application will "echo back" the script sequences, the real problem is
| that the different delimiters matter because some will execute when
| returned by the application, and others will not, depending upon the
| HTML/script code of the application. This is why it is important to
| audit the application's logic, but there really is no reason to test
| for 12 different types of cross-site scripting scenarios using
| different delimiters and script types if the detection mechanism can't
| account for which sequences actually yield results that are
| executable.
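| 
| (A rough illustration of the context point, using made-up page templates:
| both templates echo the same payload, but only one echo lands where a
| browser would execute it.)
| 
|     attr_page     = '<input type="text" value="%s">'   # attribute context
|     textarea_page = '<textarea>%s</textarea>'          # textarea context
| 
|     payload = '"><script>alert(1)</script>'
| 
|     # Attribute context: "> closes the value and the tag, and the script runs.
|     print(attr_page % payload)
|     # -> <input type="text" value=""><script>alert(1)</script>">
| 
|     # Textarea context: the same payload is rendered as inert text; here you
|     # would need a </textarea><script>... variant instead.
|     print(textarea_page % payload)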
| 
| The optimal solution, in my opinion, would be to emulate a browser, trap
| for alerts (or other events), and then organize the report data based on
| which delimiters successfully generated the desired pop-ups (or whatever
| event is trapped for). The rest could be classified as warnings.
| This would help to minimize the multiple-alerting problem that plagues
| the XSS tests and frequently produces confusing results. While it
| wouldn't fix the reporting problems, it would help to attenuate the noise.
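| 
| (A rough sketch of that approach, assuming Python with Selenium driving a
| real browser; the target URL and payload list are made up. Only payloads
| that actually pop the alert get reported as confirmed.)
| 
|     import urllib.parse
|     from selenium import webdriver
|     from selenium.webdriver.support.ui import WebDriverWait
|     from selenium.webdriver.support import expected_conditions as EC
|     from selenium.common.exceptions import TimeoutException
| 
|     payloads = ['"><script>alert(1)</script>',
|                 "'><script>alert(1)</script>"]
| 
|     driver = webdriver.Firefox()
|     confirmed, warnings = [], []
| 
|     for p in payloads:
|         driver.get("http://example.test/search?q=" + urllib.parse.quote(p))
|         try:
|             WebDriverWait(driver, 2).until(EC.alert_is_present())
|             driver.switch_to.alert.accept()
|             confirmed.append(p)       # the alert fired: report as a finding
|         except TimeoutException:
|             warnings.append(p)        # echoed or not, it did not execute: warning
| 
|     driver.quit()
|     print("confirmed:", confirmed)
|     print("warnings:", warnings)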
| 
| -tom
| 

