Vulnerability Development mailing list archives

Controlling a program's resource usage on Unix


From: bernie () FANTASYFARM COM (Bernie Cosell)
Date: Sun, 16 Apr 2000 11:37:12 -0400


The recent thread on history logging reminded me of a little project I've
been working on for a while, and I'm at a bit of an impasse:

What I'd like to do is be able to run an _arbitrary_ program and limit
what it can do.  The overall superstructure is fairly straightforward:
it'll be run in a 'no privileges' account, chrooted to a hierarchy that
doesn't include any block/char special inodes [except maybe /dev/tty] and
no setuid programs at all.  Within that environment it is fairly easy to
'watch' that the program doesn't eat a lot of disk space, and without any
root access there's no real way the program can 'break out' of its little
hierarchy or mess with the rest of the system.  If the 'launching'
process sees that the launched program is misbehaving, it'll just
killpg -9 the whole mess... so that's great...  BUT:

I don't know how [or even if it is possible!] to limit the _execution_
profile of the program.  /proc does give me some metrics on
processor/memory use by a proc and its children, but it looks like a
simple double-fork will defeat that [with the double-forked children
inherited by init when the middle proc exits and [AFAICT] untraceable].
Now, for my actual real-world application this is probably good enough
[I'm not trying to keep malicious hackers from tanking our system, but
just trying to provide a "testbed" in which intended-to-be-well-behaved
programs can be run in a way that won't impact other stuff the server is
doing], but I've been wondering just how well I can *do*.

   /Bernie\

--
Bernie Cosell                     Fantasy Farm Fibers
mailto:bernie () fantasyfarm com     Pearisburg, VA
    -->  Too many people, too few sheep  <--


