Nmap Development mailing list archives

Re: --min-parallelism sets max parallelism too


From: Brandon Enright <bmenrigh () ucsd edu>
Date: Sun, 2 Sep 2007 16:59:20 +0000


On Sun, 2 Sep 2007 10:20:54 -0500 plus or minus some time "Kris Katterjohn"
<katterjohn () gmail com> wrote:

On 9/2/07, David Fifield <david () bamsoftware com> wrote:

Well, there is also this line above that:

if (max_parallelism && min_parallelism && (min_parallelism > max_parallelism)) {
  fatal("--min-parallelism=%i must be less than or equal to --max-parallelism=%i",
        min_parallelism, max_parallelism);

So if they are both set, and min > max, then it bails.

But otherwise, max = min, because max has to be >= min and (I guess) there's
no good way to gauge what the user wants for a max.

Right, but when I set --min-parallelism, I expect the parallelism to
actually be able to increase above the minimum.

We can just use the convention that 0 for min_parallelism and
max_parallelism means "unset". The other parts of the code already work
that way. ultra_scan, for example, uses 300 for the maximum congestion
window if o.max_parallelism is 0.

Or else we could just initialize the maximum to be something huge like
100000, like what is already done with o.max_host_group_sz.
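
To illustrate the convention (just a sketch, not a patch -- names are
approximate), the option stays 0 unless the user sets it, and the code that
consumes it supplies its own default:

  /* Sketch only: 0 means "the user never set this option".          */
  /* In ultra_scan(), fall back to the existing cap of 300 for the   */
  /* maximum congestion window when no --max-parallelism was given.  */
  int max_cwnd = o.max_parallelism ? o.max_parallelism : 300;

That way setting --min-parallelism alone never drags the maximum down to
the minimum, and the min > max sanity check above still catches genuinely
contradictory combinations.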


Yeah, that's a good point.

Like I said, I mainly just use the timing templates and wasn't sure how
everything works internally, but that looks like a really good idea.

Thanks,
Kris Katterjohn


I'll reply to Kris instead of myself.  It makes me sick when people
have the audacity to reply to themselves *twice* in a row :p


Okay, now for the test results.  The base template is the same as before
(-T5 -PA135,139,445,3389 against almost 3 /16s).

Each test was a scan over a range of parallelism.  So if the min was X
and the max was Y, I ran three scans: one with the min and max both at X,
one at X-Y, and one at Y-Y.  All three of these scans were started
simultaneously.
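
In other words, for the 256-512 case the three runs were invoked roughly
like this (scan type and actual target ranges omitted):

  nmap -T5 -PA135,139,445,3389 --min-parallelism 256 --max-parallelism 256 <targets>
  nmap -T5 -PA135,139,445,3389 --min-parallelism 256 --max-parallelism 512 <targets>
  nmap -T5 -PA135,139,445,3389 --min-parallelism 512 --max-parallelism 512 <targets>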

Here are the results:

256-512:
Nmap done: 186368 IP addresses (13022 hosts up) scanned in 475.591 seconds
            Raw packets sent: 1415051 (56.602MB) | Rcvd: 100151 (4.786MB)

Nmap done: 186368 IP addresses (13063 hosts up) scanned in 452.130 seconds
            Raw packets sent: 1416471 (56.659MB) | Rcvd: 97595 (4.655MB)

Nmap done: 186368 IP addresses (12558 hosts up) scanned in 370.796 seconds
            Raw packets sent: 1422085 (56.884MB) | Rcvd: 91728 (4.369MB)


768-1536:
Nmap done: 186368 IP addresses (12140 hosts up) scanned in 415.753 seconds
            Raw packets sent: 1427367 (57.095MB) | Rcvd: 233957 (10.992MB)

Nmap done: 186368 IP addresses (12466 hosts up) scanned in 407.300 seconds
            Raw packets sent: 1423963 (56.959MB) | Rcvd: 229936 (10.814MB)

Nmap done: 186368 IP addresses (12244 hosts up) scanned in 401.650 seconds
            Raw packets sent: 1423717 (56.949MB) | Rcvd: 244548 (11.471MB)


64-10240:
Nmap done: 186368 IP addresses (13541 hosts up) scanned in 1969.513 seconds
            Raw packets sent: 1404101 (56.164MB) | Rcvd: 348732 (16.266MB)

Nmap done: 186368 IP addresses (13537 hosts up) scanned in 1946.018 seconds
            Raw packets sent: 1404226 (56.169MB) | Rcvd: 341703 (15.944MB)

Nmap done: 186368 IP addresses (12552 hosts up) scanned in 329.449 seconds
            Raw packets sent: 1420471 (56.819MB) | Rcvd: 105506 (4.969MB)


Something strange happened in the 768-1536 tests.  Somehow they took longer
than the 256-512 runs and still found fewer hosts.  This is the same range
and behavior I saw in my other tests, and it's what prompted me to drop the
range tests from the final results.  Before I tear my hair out running
hundreds of tests to figure this out, is there any reason in the code why
parallelism around 768-1536 would be worse than other values/ranges of
values?  Even the ridiculous parallelism of 10240 was faster and more
accurate.

Besides the 768-1536 test, the results speak for themselves.  I don't think
that setting --max-parallelism to 100000 or whatever is going to help much.


<crazy idea>
With the current algorithms and tuning in ultra_scan() I think the Great
Line of Nmap Parallelism looks something like this:


1------------>64<--------------512-------------4096--------Crazy Numbers

Desire for speed --->  ~64  <--- Desire for accuracy

This is something like the momentum of solar wind versus the momentum of
interstellar space meeting at (and creating) the heliosphere.  Or in Linux
terms, '/proc/sys/vm/swappiness'.

In order for a range like 64-512 to really behave any faster than 64-64
there needs to be some way of tuning 'speediness'.  Something like 0 would
be "whoa, slow down! we might miss a response", 50 would be "just
cruzin, 64 parallelism should be good", and 99 would be "leave 'em behind,
we gotta finish NOW".

There are some potential problems with implementing this, though.  It could
be hard to do.  It's rather fuzzy and hard to explain.  It could end up
being an unstable factor -- that is, one value will keep Nmap pinned at the
bottom of the range, while a value just slightly higher will keep Nmap
pinned at the top of the range.

Since I don't have a complete grasp of ultra_scan() I could be barking up
the wrong tree.  Is this something that is worth more thought?
</crazy idea>


If there are any other parallelism combinations I should be testing let me
know and I'll run the tests.

Brandon


_______________________________________________
Sent through the nmap-dev mailing list
http://cgi.insecure.org/mailman/listinfo/nmap-dev
Archived at http://SecLists.Org

