Nmap Development mailing list archives

Re: [RFC] Basing timeouts in NSE on host.times.timeout


From: Kris Katterjohn <katterjohn () gmail com>
Date: Fri, 01 Aug 2014 12:28:39 -0500

Hi Daniel, Patrick,

On 08/01/2014 11:48 AM, Daniel Miller wrote:
> On Fri, Aug 1, 2014 at 10:12 AM, Patrick Donnelly <batrick () batbytes com>
> wrote:
>> I really like the idea. My only concern is that perhaps
>> host.times.timeout may be too exact and perhaps we should add a few
>> seconds? Or does it already include some wiggle-room?
>
> host.times.timeout is what Nmap uses during the portscan phase, and it's
> based on the round-trip time and variance. Specifically,
>
>     timeout = srtt + (4 * rtt_variance)
>
> So there is some wiggle room. "srtt" is the smoothed-average RTT over the
> course of the scan, weighted towards more-recent results.
> <snip>
> Adding a few seconds makes sense, especially in some special cases where
> some host-processing time may be required (e.g. UDP packet sent, target
> processes for X seconds, target sends UDP response). It is ultimately up to
> the script author.
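
To make the quoted formula concrete, here is a small Lua illustration with
made-up RTT figures; in a real script the values come from Nmap's timing
engine via host.times, which reports seconds:

```lua
-- Illustration of timeout = srtt + (4 * rtt_variance), using invented
-- numbers.  In NSE, host.times.srtt, host.times.rttvar and
-- host.times.timeout are all in seconds.
local srtt    = 0.120                 -- smoothed RTT: 120 ms
local rttvar  = 0.030                 -- RTT variance: 30 ms
local timeout = srtt + (4 * rttvar)   -- 0.120 + 0.120 = 0.240 s
print(("timeout = %.3f s"):format(timeout))
```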

I'm surprised more scripts haven't been using host.times.timeout.  I
added the "times" table to the "host" table for just this sort of thing...

My raw IP scripts ipidseq, qscan and path-mtu use host.times.timeout for
the pcap read timeout.  As described in the comments for qscan and path-mtu,
I multiply 1000*host.times.timeout (converting seconds to milliseconds) by 2
and 1.5, respectively: primarily to account for port forwarding and for
differing packet sizes, respectively.  I didn't use an additional factor in
ipidseq.  Another script, snmp-brute, appears to multiply by 3.
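
A minimal sketch of that pattern, roughly as it might appear in an NSE
script's action function; the factor of 2, the snaplen, and the BPF filter
below are illustrative, not copied from qscan:

```lua
local nmap = require "nmap"

action = function(host)
  -- host.times.timeout is in seconds; set_timeout() takes milliseconds,
  -- hence the 1000.  The factor of 2 is illustrative padding, in the
  -- spirit of qscan's multiplier.
  local timeout_ms = 2 * 1000 * host.times.timeout

  local pcap = nmap.new_socket()
  pcap:pcap_open(host.interface, 104, false,
                 "tcp and src host " .. host.ip)
  pcap:set_timeout(timeout_ms)

  -- ... send probes and call pcap:pcap_receive(), retrying or giving
  -- up when the timeout expires ...

  pcap:close()
end
```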

Perhaps in some cases a multiplicative factor like this is better than
simply adding a fixed number of seconds to the timeout.

Cheers,
Kris Katterjohn
_______________________________________________
Sent through the dev mailing list
http://nmap.org/mailman/listinfo/dev
Archived at http://seclists.org/nmap-dev/

