Nmap Development mailing list archives

Re: using previous XML as starting point


From: Daniel Miller <bonsaiviking () gmail com>
Date: Tue, 12 Aug 2014 07:57:49 -0500

On Tue, Aug 12, 2014 at 6:24 AM, Robin Wood <robin@digi.ninja> wrote:

> I've just done a full port scan against a range of machines and got back a
> set of results with mixed numbers of open ports. I'd now like to run the
> default scripts against the ports that are open.


This is a fairly common request, so perhaps we should give it some
consideration. My concern in many cases is that it enables people to "put
on blinders" and ignore the fact that services come and go, and systems
should usually be re-scanned to discover new services. SunRPC services, for
instance, will often switch ports when they are restarted, since the
portmapper will always be able to find them.
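(As an aside, the portmapper can be queried directly to see where RPC services
currently live; a quick sketch, with 192.0.2.10 standing in for a real target:

rpcinfo -p 192.0.2.10                        # list registered RPC programs and their current ports
nmap -sV -p 111 --script rpcinfo 192.0.2.10  # the same information via Nmap's rpcinfo script

Either may report different ports than an older scan of the same host.)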


> If I go in with -sC and -p- nmap will scan all ports again, which is quite
> wasteful. The alternative is for me to grep out all the open ports and pass
> the list in to -p, which is a bit of a pain and has me scanning closed ports
> on some of the machines (i.e. 80 is only open on 4 of the 10 machines).


Compared to scanning 1000 ports/machine, this is probably much faster. I
wouldn't really consider it wasteful. Here's a 1-liner to grab the open
ports in a format that can be passed to -p:

xmlstarlet sel -t -m "//port[state/@state='open' or state/@state='open|filtered']" \
  -v "@portid" -n $NMAP_XML \
  | perl -lne '/./&&($x{$_}=1);END{$,=",";print keys%x}'
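For example, you could capture that output and feed it straight back in (a
sketch; fullscan.xml and targets.txt are placeholders for your -p- XML and
target list):

NMAP_XML=fullscan.xml
OPEN_PORTS=$(xmlstarlet sel -t -m "//port[state/@state='open' or state/@state='open|filtered']" \
  -v "@portid" -n $NMAP_XML \
  | perl -lne '/./&&($x{$_}=1);END{$,=",";print keys%x}')
nmap -sC -p "$OPEN_PORTS" -iL targets.txt

Keep in mind this still probes the whole combined port list on every host; it
just trims the list down to ports that were open somewhere.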

Getting a little crazy, here's one to grab UDP only (modify for TCP, then
run each and use like -p T:$TCP_PORTS,U:$UDP_PORTS):

xmlstarlet sel -t -m "//port[@protocol='udp' and (state/@state='open' or state/@state='open|filtered')]" \
  -v "@portid" -n $NMAP_XML \
  | perl -lne '/./&&($x{$_}=1);END{$,=",";print keys%x}'
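Putting the two together (again a sketch; capture TCP_PORTS with the same
pipeline using @protocol='tcp', and note that -sU requires root):

nmap -sS -sU -sC -p "T:$TCP_PORTS,U:$UDP_PORTS" -iL targets.txt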



> It would be good if I could pass in the XML file the -p- scan generated and
> tell the new scan to use that as a base for active machines and open ports.


We have a similar feature, --resume, which works with normal (-oN) and
grepable (-oG) output files but doesn't resume in the middle of scanning a
host. Instead, it picks up after the last completed hostgroup, since that's
when output happens, so some hosts may be re-scanned. What you are suggesting
is slightly related to a couple of ideas we have had:

1. Pipelined scans. Instead of doing hosts in batches, they are queued up
and pass through a pipeline of scan phases. As hosts leave, new hosts are
processed. This partially solves the --resume granularity issue, but still
leaves the granularity at the "host is completed" scale, which means the
feature couldn't be abused to do what you are asking.

2. NSE port scanning. If we allow NSE scripts to perform the port scanning
phase, they could do interesting things like parse a file of open ports
instead of actually contacting the host. Other possibilities here include
Shodan API, scans.io, SNMP queries, or ssh + netstat.
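(The ssh + netstat option doesn't have to wait for NSE to be useful; a rough
sketch, assuming key-based ssh access to the targets listed in hosts.txt:

for h in $(cat hosts.txt); do
  printf '%s: ' "$h"
  ssh "$h" netstat -tln | awk 'NR>2 {sub(/.*:/,"",$4); print $4}' | sort -nu | paste -sd, -
done

This prints one comma-separated list of listening TCP ports per host, which
could then drive a per-host -p.)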


> Would this be possible?


Anything is possible :)
But not right now. :(

Dan
_______________________________________________
Sent through the dev mailing list
http://nmap.org/mailman/listinfo/dev
Archived at http://seclists.org/nmap-dev/

