Nmap Development mailing list archives

Re: Memory problem when scanning testrange


From: Fyodor <fyodor@insecure.org>
Date: Tue, 19 May 2009 11:57:47 -0700

On Tue, May 19, 2009 at 02:15:00PM +0200, Dieter Van der Stock wrote:

> While trying to run an Nmap scan against a 10.10.0.0/17 range, the Nmap
> process is killed by the kernel's out-of-memory (OOM) killer because it
> uses too much memory.
>
> Syslog reports (after a full dump of memory statistics):
> kernel: Out of memory: kill process 28648 (bash) score 8319 or a child
> kernel: Killed process 28662 (nmap)
>
> The Nmap command run was:
> /usr/bin/nmap -T4 -n -p 1-65535 -oX largescan.xml 10.10.0.0/17
>
> The version of Nmap being used: Nmap 4.85BETA9
>
> Does anyone have any idea what can be done to prevent this?
> I suppose it's not an everyday-usage scenario for Nmap, but I'm basically
> checking how far I can push it :)

Hi Dieter.  How much RAM does the system have?  How much of it is free
(not taken by the other applications running) before you start Nmap?
How much is Nmap using when you look at it in top or the like?  What
does the growth look like: does it start out at a reasonable level and
then keep climbing over the hours or days of the scan?  Your command
is reasonable, and we should make sure that Nmap does not use an
excessive amount of memory in that case.  For example, we could reduce
the default host group size when so many ports are being scanned.  Or
maybe there is a memory leak we can fix, or in-memory structures we
can optimize.
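
If you want to capture the growth pattern, a rough shell loop like the
following (just a sketch; adjust the interval and log path to taste)
can run alongside the scan and record Nmap's resident set size once a
minute:

while pgrep -x nmap >/dev/null; do
    # Timestamp plus PID and resident set size (in KB) of each nmap process
    date
    ps -C nmap -o pid=,rss=
    sleep 60
done >> nmap-memory.log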
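
In the meantime, one workaround you could try is capping the host group
size yourself, so that full 1-65535 port state tables are held in
memory for fewer targets at a time.  Something along these lines (the
value 16 is just a number to experiment with, not a recommendation):

/usr/bin/nmap -T4 -n -p 1-65535 --max-hostgroup 16 -oX largescan.xml 10.10.0.0/17

Splitting the /17 into smaller ranges and scanning them one after
another would have a similar effect, at the cost of some parallelism.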

Cheers,
-F


