Nmap Development mailing list archives

Re: Weird Crash - "WAITING_TO_RUNNING"


From: Nathan <nathan.stocks () gmail com>
Date: Mon, 15 Nov 2010 14:53:59 -0700

On Mon, Nov 15, 2010 at 1:56 PM, Nathan <nathan.stocks () gmail com> wrote:
On Mon, Nov 15, 2010 at 1:37 PM, Nathan <nathan.stocks () gmail com> wrote:
On Fri, Nov 12, 2010 at 11:51 PM, David Fifield <david () bamsoftware com> wrote:
On Tue, Nov 09, 2010 at 10:32:19AM -0800, David Fifield wrote:
On Mon, Nov 08, 2010 at 05:11:27PM -0500, Patrick Donnelly wrote:
On Fri, Nov 5, 2010 at 8:02 PM, David Fifield <david () bamsoftware com> wrote:
Putting aside the question of where all the open ports are coming from
(I don't think that's Nmap's fault), we should do something about Nmap
spawning too many scripts at once. I think it is the case that Nmap
allocates memory for all script threads at once and runs them all at
once.

NSE will create a thread (coroutine), re-executing the script file's
closure, for each script, for each host, for each open port of the
host. This can be a potentially huge number, but many of these threads
are garbage collected because their (port|host)rule returns false. For
the ones whose rule function returned true, we keep the thread and its
associated data. This can be very large when many ports are open and
scripts (like skypev2-version) have "lax" portrules.

We do run all the scripts at the start, but the majority of them wait
while trying to connect. This adds somewhat to the memory cost, but not
much, I would think (compared to the setup cost above).
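As an illustration only (not NSE code), the eager-creation pattern
described above can be sketched in Python; the script/host/port data
and the `make_threads` name are hypothetical, but the shape of the
loop shows why a lax portrule multiplies thread count by open ports:

```python
# One kept "thread" per (script, host, open port) whose rule returns
# true; rejected combinations are discarded (garbage collected in NSE).
def make_threads(scripts, hosts):
    threads = []
    for script in scripts:
        for host in hosts:
            for port in host["open_ports"]:
                if script["portrule"](host, port):
                    threads.append((script["name"], host["addr"], port))
    return threads

# With a "lax" portrule that matches everything, one host with 65535
# spuriously open ports and a single script already yields 65535 threads.
lax = {"name": "skypev2-version", "portrule": lambda h, p: True}
host = {"addr": "74.62.92.70", "open_ports": range(1, 65536)}
print(len(make_threads([lax], [host])))  # 65535
```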

We could put a limit on the number of scripts that run at once,
either doing them in batches like Nmap's host groups, or running up to a
certain number and then only starting a new thread after an existing one
has finished.

The "optimal" fix would be to create a generator that creates threads
on demand (replacing most of the main function in nse_main.lua).
Dependencies (runlevels, internally) would be a little tricky.

This is the same architecture (generator) I was thinking of.
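As an illustrative sketch only, assuming a flat list of scripts and
hosts (the real nse_main.lua would also have to honor the dependency
runlevels mentioned above), the on-demand generator architecture looks
roughly like this in Python:

```python
def threads_iter(scripts, hosts):
    """Yield one script thread at a time instead of materializing all
    of them up front; the consumer pulls a thread only when it has a
    free slot, so memory stays bounded by the number running."""
    for script in scripts:
        for host in hosts:
            for port in host["open_ports"]:
                # The rule check still runs for every combination, but
                # rejected ones never allocate a kept thread.
                if script["portrule"](host, port):
                    yield (script["name"], host["addr"], port)

lax = {"name": "skypev2-version", "portrule": lambda h, p: True}
host = {"addr": "74.62.92.70", "open_ports": range(1, 11)}
print(sum(1 for _ in threads_iter([lax], [host])))  # 10
```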

Nathan, please try out this nse_main.lua. It has a quick and dirty
modification that prevents the creation of more than 100 script threads
at a time. Run the scan so that it creates lots of spurious open ports
like before. It should not use up all your memory and should eventually
finish.

I think we will actually set the limit higher than 100 in practice.
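A minimal sketch of the capping logic, borrowing the CONCURRENCY_LIMIT
and threads_iter names that appear in the patch excerpt quoted below;
the "threads" here are plain values and finishing is simulated, so this
only illustrates the bounded-concurrency idea, not the NSE scheduler:

```python
CONCURRENCY_LIMIT = 100

def run_all(thread_iter):
    running = []
    done = 0
    peak = 0
    while True:
        # Top up the running set, but never beyond the limit -- a new
        # thread is created only after an existing one has finished.
        while len(running) < CONCURRENCY_LIMIT:
            thread = next(thread_iter, None)
            if thread is None:
                break
            running.append(thread)
        peak = max(peak, len(running))
        if not running:
            break
        running.pop(0)  # simulate one thread finishing
        done += 1
    return done, peak

print(run_all(iter(range(250))))  # (250, 100)
```

All 250 threads still run to completion, but no more than 100 ever
exist at once, which is why memory stays bounded.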

David Fifield


Cool, a patch just for this!  So, I found nse_main.lua at
/usr/share/nmap/nse_main.lua and replaced it with your file.
Unfortunately, it died immediately:

$ /usr/bin/sudo /usr/bin/nmap -sS -sV -T4 -p 1-65535 74.62.92.70 -P0

Starting Nmap 5.35DC1 ( http://nmap.org ) at 2010-11-15 13:35 MST
NSE: failed to initialize the script engine:
could not load nse_main.lua: /usr/share/nmap/nse_main.lua:642: 'do'
expected near 'local'

QUITTING!

~ Nathan

Found it!

In the middle of the patch, it looks like a space was missing:

+  while num_threads < CONCURRENCY_LIMITdo
+    local thread = threads_iter()

I changed it to

+  while num_threads < CONCURRENCY_LIMIT do
+    local thread = threads_iter()

...and now it's running.  We'll see how it turns out in a while.

~ Nathan


Okay, it didn't change the accuracy (we didn't expect it to), so it
still thought all 65k+ ports were open.  But it certainly limited RAM
usage and actually finished!

It was using about 55MB RAM when it ended, and it took 5m23s -- a huge
improvement over using 4GB of RAM and crashing!

I hope this gets committed.  It certainly helps me in my situation --
now, instead of having huge memory problems and crashes with no flags I
can find to fix the accuracy, I just get a lot of open ports in the
results, which I can watch for and throw out.

Let me know if you'd like me to test any other versions of this -- I'd
be happy to do any testing to help this get committed.

~ Nathan
_______________________________________________
Sent through the nmap-dev mailing list
http://cgi.insecure.org/mailman/listinfo/nmap-dev
Archived at http://seclists.org/nmap-dev/

