Nmap Development mailing list archives

RE: Overview on Nessus web app vulnerability scan


From: "Rob Nicholls" <robert () everythingeverything co uk>
Date: Sun, 26 Jul 2009 11:34:29 +0100

Hi Joao,

Here are some of my thoughts after reading your interesting ideas:

You mentioned only scanning "common directories" to improve performance.
Another (optional?) way to improve performance would be to check for
case-insensitive web servers (e.g. IIS): we could make a single request
for /phpmyadmin/ instead of trying /phpMyAdmin/, /phpmyadmin/ and
/PHPMyAdmin/ separately. This might be less useful if someone spoofs the
Server header, though.
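Off the top of my head, the check itself could be something like this
(plain Python rather than NSE, purely a sketch; the base URL and probe
path are whatever we already know exists on the target):

    import urllib.error
    import urllib.request

    def is_case_insensitive(base_url, known_path="/index.html"):
        # Fetch a known path twice, once with the casing flipped.
        # If both return the same 2xx status, the server is probably
        # treating paths case-insensitively (as IIS does).
        def status(path):
            try:
                return urllib.request.urlopen(base_url + path).status
            except urllib.error.HTTPError as e:
                return e.code
        original = status(known_path)
        flipped = status(known_path.swapcase())
        return original == flipped and 200 <= original < 300

If that returns true, the brute-force wordlist can be deduplicated by
lowercasing every entry first.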

It may be useful to run the http-enum script before http-spider (or whatever
it's going to be called), with a way of passing the information across
(similar to how the smb-based scripts work?), but I suspect that's already
planned.
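For what it's worth, a rough sketch of the hand-off I have in mind (all
names invented for illustration; NSE would presumably use its shared
registry rather than a module-level dict):

    # (host, port) -> list of paths found by the enumeration phase
    discovered = {}

    def record_paths(host, port, paths):
        # Called by the enumeration script once it has results.
        discovered.setdefault((host, port), []).extend(paths)

    def spider_seeds(host, port):
        # Called by the spider: start from enumerated paths if any,
        # falling back to the document root.
        return discovered.get((host, port)) or ["/"]

That way the spider never rediscovers directories http-enum already
paid for.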

I would imagine Nessus' XSS detection involves sending a malicious
request and checking whether the payload appears in the response (I've
not sniffed the traffic myself yet). Other tools I've seen use a unique
attack string (e.g. <script>alert(461327);</script>, where the random
number changes between attacks) when submitting POST or GET requests, in
case the attack isn't immediately returned but shows up a few pages
later (e.g. a shopping cart might only prove vulnerable on the
confirmation page a page or two on, when everything is displayed again).
That would require extra work (and a lot more memory?) to track the
source URL and variable name against each unique number, and then to
check every page for our attacks (presumably using a regular expression
to detect the attack and compare the number against the list of injected
numbers).
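Something like this is what I'm picturing (again a plain Python sketch,
names made up):

    import random
    import re

    MARKER_RE = re.compile(r"<script>alert\((\d+)\);</script>")
    injected = {}   # marker number -> (source URL, parameter name)

    def make_payload(url, param):
        # One unique number per injection, so a later reflection can
        # be traced back to the request that planted it.
        n = random.randrange(10**8, 10**9)
        injected[n] = (url, param)
        return "<script>alert(%d);</script>" % n

    def check_page(url, body):
        # Run this over every crawled page, not just the immediate
        # response, to catch reflections that surface pages later.
        for match in MARKER_RE.finditer(body):
            n = int(match.group(1))
            if n in injected:
                src, param = injected[n]
                print("XSS: param %r injected at %s reflected on %s"
                      % (param, src, url))

The tracking table is only one small tuple per injected request, so the
memory cost is probably modest next to holding the crawled pages
themselves.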

I'm not sure we can avoid potentially changing the state of the web
server, as we don't know for certain that GET requests won't permanently
affect the output (they shouldn't, but you never know). Equally, most
user input these days is sent as a POST request (especially ASP.NET
content, usually because the VIEWSTATE has to be returned), so avoiding
POST requests would severely limit the scanner's effectiveness. But it
might be sensible to start with a GET-based XSS scanner that only checks
for reflected XSS.
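That GET-only starting point could be as small as this (illustrative
Python; a real check would also want to try encoded variants of the
payload):

    import urllib.parse
    import urllib.request

    def check_reflected(url, param, payload):
        # Replace one query-string parameter with the payload and look
        # for it verbatim in the immediate response. GET only, so no
        # form submissions that might change server state.
        parts = urllib.parse.urlsplit(url)
        query = dict(urllib.parse.parse_qsl(parts.query))
        query[param] = payload
        test_url = urllib.parse.urlunsplit(
            parts._replace(query=urllib.parse.urlencode(query)))
        body = urllib.request.urlopen(test_url).read()
        return payload.encode() in body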

Talking of ASP.NET's VIEWSTATE, that might be another way to improve
performance: by decoding the VIEWSTATE we would know which fields cannot
be manipulated (e.g. drop-down lists) and could focus on the ones we can
play with (typically date fields and free-text strings).
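Properly parsing the serialized format (LOSFormatter /
ObjectStateFormatter) is fairly involved, and if the VIEWSTATE is
encrypted you'll only see noise, but even a crude first pass gives a
feel for what's in there. A minimal sketch:

    import base64

    def peek_viewstate(vs):
        # Base64-decode a __VIEWSTATE value and dump printable runs,
        # e.g. drop-down list values and control names. This is only
        # a rough peek, not a real parser.
        raw = base64.b64decode(vs)
        run, runs = [], []
        for byte in raw:
            if 32 <= byte < 127:
                run.append(chr(byte))
            else:
                if len(run) >= 4:
                    runs.append("".join(run))
                run = []
        if len(run) >= 4:
            runs.append("".join(run))
        return runs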

Most of the automated web app tools I've used struggle to detect and
follow JavaScript-based URLs, so it would be great if we could get that
(mostly) working.

Rob



_______________________________________________
Sent through the nmap-dev mailing list
http://cgi.insecure.org/mailman/listinfo/nmap-dev
Archived at http://SecLists.Org

