Nmap Development mailing list archives

Re: massping-migration and other dev testing results


From: David Fifield <david () bamsoftware com>
Date: Mon, 17 Sep 2007 15:00:34 -0600

On Sat, Sep 15, 2007 at 06:09:13AM +0000, Brandon Enright wrote:
I did this scan with MPM r5829 twice, sequentially, with no other network
traffic or CPU load on the box.  Once with T3 and once with T5.

$ egrep -i 'pcap stats' david_mpm_r5829bT3.nmap
pcap stats: 131 packets received by filter, 0 dropped by kernel.
pcap stats: 18 packets received by filter, 0 dropped by kernel.
pcap stats: 44 packets received by filter, 0 dropped by kernel.
...


$ egrep -i 'pcap stats' david_mpm_r5829bT5.nmap
pcap stats: 138 packets received by filter, 0 dropped by kernel.
pcap stats: 18 packets received by filter, 0 dropped by kernel.
pcap stats: 43 packets received by filter, 0 dropped by kernel.
...
pcap stats: 110 packets received by filter, 0 dropped by kernel.
pcap stats: 152 packets received by filter, 0 dropped by kernel.
pcap stats: 2572 packets received by filter, 642 dropped by kernel.
pcap stats: 712 packets received by filter, 0 dropped by kernel.
pcap stats: 148 packets received by filter, 0 dropped by kernel.
...
pcap stats: 59 packets received by filter, 0 dropped by kernel.

Other than the one drop spike, everything went fine.  Is there any way to
figure out why the kernel would drop received packets?  My best guess is
that there is a pretty short buffer for incoming packets, and if too many
probes are sent at once, then when the responses come back and Nmap takes
too long to read them from the buffer, packets get dropped.  That would help
explain why with the 64k PING_GROUP_SZ the kernel was dropping like crazy
-- Nmap was spending too much time sending, and the latency before it got
around to reading the buffer was too high.  Or maybe it's something
completely different.
Thoughts?

How interesting. I wonder if those 642 packets dropped during one round
could account for all 282 missing hosts.

Your guess as to why packets are being dropped is mine too. Packets come
in fast and the state machine doesn't move into the packet reading state
fast enough.
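
Incidentally, those "pcap stats" lines are the libpcap receive/drop
counters; I believe they come from a pcap_stats() call. ps_drop counts
packets that arrived and matched the filter but were thrown away because
the kernel's capture buffer filled up before the application read it.
Here is a minimal, self-contained sketch of reading those counters (the
device name, snaplen, and timeout below are placeholders, not what Nmap
actually uses):

    /* Sketch: query libpcap's receive/drop counters.  The capture
     * parameters here are illustrative only. */
    #include <pcap.h>
    #include <stdio.h>

    int main(void)
    {
        char errbuf[PCAP_ERRBUF_SIZE];
        pcap_t *pd = pcap_open_live("eth0", 100, 0, 2, errbuf);
        if (pd == NULL) {
            fprintf(stderr, "pcap_open_live: %s\n", errbuf);
            return 1;
        }

        /* ... send probes and read responses for a while ... */

        struct pcap_stat ps;
        if (pcap_stats(pd, &ps) == 0) {
            /* ps_recv: packets received through the filter.
             * ps_drop: packets dropped because the kernel buffer was
             * full, i.e. we weren't reading fast enough. */
            printf("pcap stats: %u packets received by filter, "
                   "%u dropped by kernel.\n", ps.ps_recv, ps.ps_drop);
        }
        pcap_close(pd);
        return 0;
    }

So a nonzero ps_drop means the responses made it to the machine; we just
didn't pull them out of the kernel buffer in time.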

I had put in a change that scaled congestion window increments by up to
a factor of 8 in -T4 and -T5 (but with your four probes per host it
would scale by only a factor of 2). I did this before I implemented the
scaling
based on the packet receipt ratio, but it's still in effect. I think
I'll take it out now that we have a more general solution to the
problem. Further scaling the window is being greedy and might be
contributing to the drops. Then I'll merge back into the trunk.

David Fifield

_______________________________________________
Sent through the nmap-dev mailing list
http://cgi.insecure.org/mailman/listinfo/nmap-dev
Archived at http://SecLists.Org

