Nmap Development mailing list archives

Re: Slow name-resolution of very large target list


From: Brandon Enright <bmenrigh () ucsd edu>
Date: Fri, 23 May 2008 02:24:03 +0000


Hi Doug,

I went ahead and decided to give Nuff a try.  Very impressive so far.
It's a little too aggressive when I give it a list of more than
about 200 names, but I'm happy to chunk my input.

While testing, I found an input list that causes Nuff to error out with:
Error: set-input-port: needs 1 argument(s)

I can't figure out what about the list is causing this problem.  When I
delete just a few random hosts out of the list it passes.  When I scan
just the first 200 or last 200 it passes.  I can reproduce the problem
but I can't figure out what variables cause it to pass or fail when I
adjust the input list.  I'm including the list so that you can take a
look.

Brandon


On Thu, 22 May 2008 18:37:24 -0700
doug () hcsw org wrote:

Hi Brandon,

On Thu, May 22, 2008 at 02:13:49PM -0700 or thereabouts, Fyodor wrote:
On Thu, May 22, 2008 at 07:16:31AM +0000, Brandon Enright wrote:
I've tried the scan from another network that has access to many
very fast local DNS servers and have specified them with
--dns-servers but that didn't seem to make any noticeable
difference.

Fyodor beat me to it -- I was about to say almost exactly the same thing.
The nmap DNS system is for reverse DNS only; other query types
were never in the specs when nmap_dns was created. Like Fyodor says,
when you do need forward resolution, it is usually easiest to use a
separate program. I think the adns library has an example that does
exactly this. (?)

SHAMELESS PLUG: Nuff has a program that will do this for you, but
how well it scales to millions of resolutions, I don't know:

http://www.hcsw.org/nuff/code.html#section.auto.4.2.3

$ printf 'a.com\nb.com\n' | nuff resolve -stdin


Alternatively, there is a pattern I frequently use for doing things
in parallel. I call it fork/wait:

#!/usr/bin/perl
# fork/wait skeleton by frax (patent pending)
use strict;
use warnings;

my $parallelism = 10;   # maximum number of concurrent children
my $children    = 0;

while (<>) {
  chomp;
  if (fork() == 0) {
    # Child: resolve one name, then exit.
    system("host -t A $_");
    exit;
  }
  $children++;
  # Throttle: once at the limit, reap one child before forking another.
  if ($children >= $parallelism) {
    wait();
    $children--;
  }
}

# Reap any children still running after the input is exhausted.
while ($children > 0) {
  wait();
  $children--;
}


The above will do up to 10 resolutions in parallel. Use it like this:

cat domains.txt | perl pwnyresolv0r.pl > output.txt

The output should look like the following. It will require a little
post-processing, and the results WILL NOT be in the same order as your
input file.

hcsw.org has address 65.98.116.106
slashdot.org has address 66.35.250.150
google.com has address 64.233.187.99
google.com has address 72.14.207.99
google.com has address 64.233.167.99

Note that for millions of resolutions this may not be efficient
enough, because every resolution spawns 3 (!) processes: another perl
instance (the forked child), the shell created by system(), and an
instance of host(1). For short-lived tasks like DNS resolution the
process overhead might be too much. However, fork/wait is a great way
to parallelise many types of IO-bound programs.
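
As an aside, GNU xargs has the same fork/wait throttling built in via
its -P option, so the skeleton above can be a one-liner. A hedged
sketch (I substitute echo for host so it runs without network access;
swap it back to "host -t A" for real lookups):

```shell
# Build a small input list, as in the example above.
printf 'a.com\nb.com\n' > domains.txt

# -P 10: up to 10 children in parallel; -n 1: one domain per invocation.
# "echo RESOLVE" stands in for "host -t A" so this sketch runs offline.
xargs -P 10 -n 1 echo RESOLVE < domains.txt
```

As with the Perl version, output order depends on which child finishes
first, so expect to sort or post-process the results.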

Hope this helps,

Doug

PS. Be careful about locking your output stream. I think the above
code is OK because the writes will all be atomic, but if you're not
careful you can get interleaved output like:

hcsw.org hslashdot.org has address 66.35.250.150
as address 65.98.116.106

Attachment: crashlist.txt

_______________________________________________
Sent through the nmap-dev mailing list
http://cgi.insecure.org/mailman/listinfo/nmap-dev
Archived at http://SecLists.Org
