Nmap Development mailing list archives

Re: [NSE] Resource Cleanup Upon Thread Death


From: David Fifield <david () bamsoftware com>
Date: Tue, 2 Jun 2009 16:41:19 -0600

On Mon, Jun 01, 2009 at 11:11:26PM -0600, Patrick Donnelly wrote:
Attached is a patch to allow the cleanup of resources when a thread
dies for any reason (including normally). The current and only
resource we have in mind for this is mutexes. When a thread ends, any
mutex it holds is unlocked immediately, allowing other threads to avoid
deadlock.

Feel free to test this (an example script is attached also) and report
what you think.
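
[For context, a sketch of the situation the patch addresses, using the
real nmap.mutex NSE API but an invented script body: if action() raises
an error between "lock" and "done", the mutex stays locked forever
without the cleanup patch, deadlocking every other thread waiting on it.]

```lua
-- Sketch only (not from the patch); the resource key and worker
-- function are invented.
local mutex = nmap.mutex("my-shared-resource")

action = function(host, port)
  mutex "lock"
  -- If this raises an error, the thread dies holding the lock.
  -- The patch registers a destructor so the lock is released anyway.
  local result = do_something(host, port)
  mutex "done"
  return result
end
```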

I think this patch is good and should be committed, but first I have
just a few questions about the code.

Index: nse_nmaplib.cc
===================================================================
--- nse_nmaplib.cc    (revision 13460)
+++ nse_nmaplib.cc    (working copy)
@@ -226,6 +226,9 @@
       {
         lua_pushthread(L);
         lua_replace(L, lua_upvalueindex(2)); // set running
+        lua_pushvalue(L, lua_upvalueindex(3)); // unique identifier
+        lua_pushvalue(L, lua_upvalueindex(4)); // aux_mutex_done closure
+        nse_destructor(L, 'a');
         return 0;
       }
       lua_pushthread(L);
@@ -235,6 +238,10 @@
       lua_pushthread(L);
       if (!lua_equal(L, -1, lua_upvalueindex(2)))
         luaL_error(L, "%s", "Do not have a lock on this mutex");
+      /* remove destructor */
+      lua_pushvalue(L, lua_upvalueindex(3));
+      nse_destructor(L, 'r');
+      /* set new thread to lock the mutex */
       lua_getfield(L, LUA_REGISTRYINDEX, "_LOADED");
       lua_getfield(L, -1, "table");
       lua_getfield(L, -1, "remove");
@@ -242,10 +249,14 @@
       lua_pushinteger(L, 1);
       lua_call(L, 2, 1);
       lua_replace(L, lua_upvalueindex(2));
-      if (!lua_isnil(L, lua_upvalueindex(2))) // waiting threads had a thread
+      if (lua_isthread(L, lua_upvalueindex(2))) // waiting threads had a thread
       {
-        assert(lua_isthread(L, lua_upvalueindex(2)));
-        nse_restore(lua_tothread(L, lua_upvalueindex(2)), 0);
+        lua_State *thread = lua_tothread(L, lua_upvalueindex(2));
+        lua_pushvalue(L, lua_upvalueindex(3)); // destructor key
+        lua_pushvalue(L, lua_upvalueindex(4)); // destructor
+        lua_xmove(L, thread, 2);
+        nse_destructor(thread, 'a');
+        nse_restore(thread, 0);

I am having trouble keeping track of the upvalue indices associated with
a thread. I would like you to add a big block comment describing what
each index contains.

+  function Thread:close ()
+    local ch = self.close_handlers;
+    for i = #ch, 1, -1 do
+      ch[i].destructor(ch[i].thread, i);
+      ch[i] = nil;
+    end
+    for key, destructor_t in pairs(ch) do
+      destructor_t.destructor(destructor_t.thread, key);
+      ch[key] = nil;
+    end
+  end

Please explain what this bit of code is doing and why there are two
loops. It looks like you are calling all the integer-keyed destructors
in reverse order, then all the destructors with other keys in whatever
order they appear. What is the structure of close_handlers? Is it
possible to enforce integer keys only and simplify this?
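
[A stand-alone sketch of how I read those two loops, with invented
handler entries: the first loop pops the array part in reverse order
of registration and nils each slot, so by the time pairs() runs it
only sees the remaining non-integer keys.]

```lua
-- Stand-alone sketch of the Thread:close traversal (handlers invented).
local ch = {
  {destructor = function(t, k) print("int key", k) end, thread = "co1"},
  {destructor = function(t, k) print("int key", k) end, thread = "co2"},
  mutex = {destructor = function(t, k) print("other key", k) end,
           thread = "co3"},
}
-- First loop: integer keys, newest first.
for i = #ch, 1, -1 do
  ch[i].destructor(ch[i].thread, i)
  ch[i] = nil
end
-- Second loop: whatever keys remain (the integer slots are gone).
for key, d in pairs(ch) do
  d.destructor(d.thread, key)
  ch[key] = nil
end
-- prints: int key 2, int key 1, other key mutex
```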

@@ -396,6 +412,7 @@
   -- nse_restore, waiting threads become pending and later are moved all
   -- at once back to running.
   local running, waiting, pending = {}, {}, {}
+  local all = setmetatable({}, {__mode = "kv"}); -- base coroutine to Thread
   -- hosts maps a host to a list of threads for that host.
   local hosts, total = {}, 0
   local current
@@ -406,19 +423,37 @@
     local thread = remove(threads);
     thread:d("Starting %THREAD against %s%s.", thread.host.ip,
         thread.port and ":"..thread.port.number or "");
-    running[thread.co], total = thread, total + 1;
+    all[thread.co], running[thread.co], total = thread, thread, total+1;
     hosts[thread.host] = hosts[thread.host] or {};
     hosts[thread.host][thread.co] = true;
   end

Is the new "all" table to facilitate your planned change to allow a
script thread to start its own coroutines?
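
[As an aside on the __mode = "kv" metatable: with weak keys and values,
an entry vanishes once nothing else references the coroutine, so the
table does not keep dead threads alive. A stand-alone sketch with an
invented payload:]

```lua
-- Stand-alone sketch of a weak "kv" table mapping coroutines to
-- per-thread state (the Thread objects in the patch).
local all = setmetatable({}, {__mode = "kv"})

local co = coroutine.create(function () end)
all[co] = {name = "some thread state"}  -- invented payload
assert(all[co] ~= nil)

co = nil            -- drop the only strong reference
collectgarbage()    -- the entry becomes collectable; after a full
                    -- cycle, next(all) should be nil
```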

David Fifield

_______________________________________________
Sent through the nmap-dev mailing list
http://cgi.insecure.org/mailman/listinfo/nmap-dev
Archived at http://SecLists.Org
