Nmap Development mailing list archives

Re: [NSE] [patch] Big changes to http-enum.nse


From: Patrik Karlsson <patrik () cqure net>
Date: Sun, 17 Oct 2010 17:49:48 +0200


On 17 okt 2010, at 16.21, Ron wrote:


On Sun, 17 Oct 2010 11:05:30 +0200 Patrik Karlsson <patrik () cqure net> wrote:
Basically I want to be able to do:
/admin - Bubba|2 NAS administration web page
/webmail - Squirrelmail v1.2.3
/webmail - Outlook Web Access v2.3.4
/webmail - GroupWise Web Access v1.2.3
/wp - WordPress v1.2.3
/wordpress - Wordpress v1.2.3

Rather than:
/admin - Admin Directory
/web - Potentially interesting folder
Both of those are the goal, in my opinion. I'd like both to find interesting folders and to fingerprint Web applications.
And, finally, to find simple vulnerabilities.


Anyway, in order to be able to do fingerprinting I came to the
conclusion that I wanted to split the probe and match parts to
resemble the service/version scan. Splitting the two makes it
possible to run a probe for e.g. /admin and then have several match
lines determining which application is actually behind that URL.
While the current design does allow this, it involves doing a new
request for each match, which will eventually become a problem as
the database grows.
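To illustrate the split being proposed, here is a minimal sketch (in Python rather than NSE's Lua, for brevity; the field names "probe", "matches", "pattern" and "output" are made up for illustration and are not the actual http-enum database format): one probe is requested a single time, and every match line is tested against that one cached response.

```python
import re

# Hypothetical probe/match layout: one request, many candidate matches.
fingerprint = {
    "probe": {"path": "/webmail/", "method": "GET"},
    "matches": [
        {"pattern": r"SquirrelMail", "output": "Squirrelmail"},
        {"pattern": r"Outlook Web Access", "output": "Outlook Web Access"},
        {"pattern": r"GroupWise", "output": "GroupWise Web Access"},
    ],
}

def identify(body, fingerprint):
    """Run every match line against one cached response body."""
    for m in fingerprint["matches"]:
        if re.search(m["pattern"], body):
            return m["output"]  # stop at the first hit
    return None
```

With this layout, adding a new webmail product means adding one match line, not one extra HTTP request.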

Another way to solve this could of course be to keep the one-line
layout and use the checkdir (URL) as a key in a table to make sure it's
requested only once, and then run all matches against the result.
Ideally matching would stop once a match was found, so that only a
single match would be reported.
That's true. We can easily use the same database format but ensure that each URI is only requested once by using a 
slightly different internal database. I started implementing that but ran into problems. Right now, we 
let each fingerprint have a different verb, which means each URI:verb pair would only be requested once. But what if 
we add other fields later, like POST data or Content-Type or others? We run into quite a pickle. 

True


Fortunately, we have caching with the HTTP library. That might be the best place for this to happen. The other option 
is to add some intelligence to http.pipeline() to recognize identical requests and simply copy the responses. 

The other option is to only combine simple GET requests. As soon as a fingerprint has a different verb or different 
anything, even if it matches another one, create a new entry. That would be the simplest logic and would be able to 
combine the fingerprints in 99% or more of cases (actually, 100% now since we only have GET). 
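That "combine only simple GETs" rule could look roughly like the following sketch (again Python for illustration, with the same hypothetical field names as above): identical simple GET probes share a key and their match lines are merged, while anything with a different verb or extra fields keeps its own entry.

```python
def probe_key(fp):
    # Only simple GETs share a key; anything with another verb or
    # extra fields (POST data, headers, ...) is never combined.
    probe = fp["probe"]
    if probe.get("method", "GET") != "GET" or "postdata" in probe:
        return id(fp)  # unique key: gets its own entry
    return ("GET", probe["path"])

def combine(fingerprints):
    """Group fingerprints so each simple GET URI is requested only once."""
    combined = {}
    for fp in fingerprints:
        entry = combined.setdefault(probe_key(fp),
                                    {"probe": fp["probe"], "matches": []})
        entry["matches"].extend(fp["matches"])
    return list(combined.values())
```

Since all current fingerprints are simple GETs, this would collapse every duplicate URI today, while leaving room for POST or header-based probes later.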

That sounds like a solution.


Any thoughts?

Well, sorry for being so problem-oriented, but I thought of one more thing.
With the current design it's easy and flexible to discover applications that use their default URLs, e.g. /mediawiki.
However, a webmail app may use a URL like /webmail, /mail, or even /.
In this case a match line for e.g. OWA would need to be duplicated for each alternative.
In order to address this I only see the option of separating the probes from the matches: first run all the probes, 
then do all the matching.
Maybe I'm missing an obvious solution to this, or trying to fit something into the script that isn't supposed to go 
there.
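The probes-then-matches separation described above could be sketched like this (illustrative Python again; the probe list, match table, and the responses mapping are all hypothetical): every match line is tried against every probed path, so a single OWA entry covers /webmail, /mail, and / without duplication.

```python
import re

# Hypothetical: probes and matches kept in separate lists, so one
# match line (e.g. OWA) can fire on any candidate path.
probes = ["/webmail/", "/mail/", "/"]
matches = [
    {"pattern": r"Outlook Web Access", "output": "Outlook Web Access"},
    {"pattern": r"SquirrelMail", "output": "Squirrelmail"},
]

def fingerprint_all(responses):
    """responses: dict mapping path -> body, each fetched exactly once."""
    results = []
    for path in probes:
        body = responses.get(path)
        if body is None:
            continue
        for m in matches:
            if re.search(m["pattern"], body):
                results.append((path, m["output"]))
                break  # stop at the first match per path, as suggested above
    return results
```

The cost is that the number of match attempts becomes probes x matches rather than one list scan, but no extra HTTP requests are made.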
 

Anyway, great work! I'm going to see if I can add some more entries
that do version matching to the database to try it out a bit more.
Great! Right now I only have my static links, because I simply converted the old database. If we can start getting 
some better entries, it'll be far cooler to show off. 

Ron


//Patrik
--
Patrik Karlsson
http://www.cqure.net
http://www.twitter.com/nevdull77





_______________________________________________
Sent through the nmap-dev mailing list
http://cgi.insecure.org/mailman/listinfo/nmap-dev
Archived at http://seclists.org/nmap-dev/

