Nmap Development mailing list archives
Re: Memory problem when scanning testrange
From: Brandon Enright <bmenrigh () ucsd edu>
Date: Tue, 19 May 2009 16:37:04 +0000
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On Tue, 19 May 2009 14:15:00 +0200 or thereabouts Dieter Van der Stock <dietervds () gmail com> wrote:
> Hello everyone,
>
> While trying to run an Nmap scan against a 10.10.0.0/17 range, the
> Nmap process is automatically killed because of too much memory usage.
I run into this pretty regularly. I just have to be careful about how big a hostgroup I pick and how many scans I run concurrently.
> In syslog it says (after a whole output dump of memory stuff):
>
> kernel: Out of memory: kill process 28648 (bash) score 8319 or a child
> kernel: Killed process 28662 (nmap)
>
> The Nmap command run was:
>
> /usr/bin/nmap -T4 -n -p 1-65535 -oX largescan.xml 10.10.0.0/17
Well, 65535 ports * 32768 hosts = 2147450880 port states. Now, Nmap doesn't try to tackle it all at once; it does the scan in hostgroups, which are generally <= 256 hosts.
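That arithmetic can be sketched with shell arithmetic (numbers taken from the message above; the 256-host figure is the typical hostgroup cap mentioned here, not a hard limit):

```shell
# Total TCP port states for the whole job: 65535 ports x 32768 hosts in a /17
echo $((65535 * 32768))   # 2147450880

# Port states in flight for a single hostgroup at the usual <= 256-host size
echo $((65535 * 256))     # 16776960
```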
> The version of Nmap being used: Nmap 4.85BETA9
>
> Does anyone have any idea what can be done to prevent this? I suppose
> it's not an everyday-usage scenario of Nmap, but I'm basically checking
> out how far I can push it :)
The usage scenario you have above is unrealistic for TCP, but if you really want it to work, add --max-hostgroup 32 to your scan. You didn't say how much free memory is available on your box, or whether it is a 64-bit system such as x86_64 (which significantly increases memory usage), but I generally expect very large scans to use between 4 and 6 GB of memory.
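Applied to the command line quoted from the original message, the suggested flag would look like this (a sketch, not verified against this particular scan):

```shell
/usr/bin/nmap -T4 -n -p 1-65535 --max-hostgroup 32 -oX largescan.xml 10.10.0.0/17
```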
> Cheers and with regards to you all,
> Dieter
Unfortunately, the above usage scenario may not be terribly realistic for TCP, but it is common for UDP. There is a TODO item to look at memory usage with large UDP scans, but I don't think the problem is at all limited to UDP. I think TCP consumes just as much memory; we just don't notice it like we do with UDP because we don't do monster scans with TCP.

Also note that even if you have enough memory to actually scan, the act of outputting your scan results may double (or more) your memory usage while the string is being constructed in memory and written to the screen/a file. To fix this significant memory spike on output, we'd have to entirely change how output.cc and some of the supporting code works and is designed.

If you really want to scan 64k ports on a /17, expect it to take about 24 hours with an average parallelism of 40 and to use about 6 GB of RAM.

Brandon

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.11 (GNU/Linux)

iEYEARECAAYFAkoS4DYACgkQqaGPzAsl94JWvwCeI6CfQYU78Q1aSPg83HcvCFFA
WnEAnAxXo0FPLGDYy/huXkX0cr/277Pa
=nVoi
-----END PGP SIGNATURE-----

_______________________________________________
Sent through the nmap-dev mailing list
http://cgi.insecure.org/mailman/listinfo/nmap-dev
Archived at http://SecLists.Org
Current thread:
- Memory problem when scanning testrange Dieter Van der Stock (May 19)
- Re: Memory problem when scanning testrange Brandon Enright (May 19)
- Re: Memory problem when scanning testrange Fyodor (May 19)
- Re: Memory problem when scanning testrange Dieter Van der Stock (May 20)
- RE: Memory problem when scanning testrange Aaron Leininger (May 20)
- Re: Memory problem when scanning testrange Dieter Van der Stock (May 20)
- <Possible follow-ups>
- RE: Memory problem when scanning testrange Dieter Van der Stock (May 21)