Secure Coding mailing list archives

Resource limitation


From: petesh at indigo.ie (Pete Shanahan)
Date: Tue, 18 Jul 2006 13:04:59 +0100

leichter_jerrold at emc.com wrote:
I was recently looking at some code to do regular expression matching,
when it occurred to me that one can produce fairly small regular
expressions that require huge amounts of space and time.  There's
nothing in the slightest bit illegal about such regexp's - it's just
inherent in regular expressions that such things exist.


Been there, done that, watched computers go down again and again from this.
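
To make the point concrete, here's a rough sketch in Python (the pattern and
the 28-character input are purely illustrative): a nested quantifier forces the
matcher to try exponentially many ways to split the input before giving up.

    import re
    import time

    # Nested quantifiers: the engine can split a run of 'a's between the
    # inner and outer '+' in exponentially many ways.
    pattern = re.compile(r"^(a+)+$")

    # The trailing 'b' guarantees failure, so every split gets explored.
    subject = "a" * 28 + "b"

    start = time.time()
    pattern.match(subject)    # returns None, but only after ~2^28 attempts
    print("gave up after %.1f seconds" % (time.time() - start))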

Or consider file compression formats.  Someone out there has a hand-
constructed zip file that corresponds to a file with more bytes than
there are particles in the universe.  Again, perfectly legal as it
stands.
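
The only workable defence I've seen is to count the bytes as they actually come
out of the decompressor, since the sizes declared inside a crafted archive
can't be trusted. A rough Python sketch, with an arbitrary 100 MB budget:

    import zipfile

    MAX_OUTPUT = 100 * 1024 * 1024    # illustrative budget for expanded data
    CHUNK = 64 * 1024

    def expand_with_budget(archive_path):
        # Count what the decompressor actually produces; the lengths in the
        # archive's own headers may be lies.
        written = 0
        with zipfile.ZipFile(archive_path) as zf:
            for info in zf.infolist():
                with zf.open(info) as member:
                    while True:
                        chunk = member.read(CHUNK)
                        if not chunk:
                            break
                        written += len(chunk)
                        if written > MAX_OUTPUT:
                            raise ValueError("archive expands past the budget")
                        # ... hand the chunk to whoever wanted the data ...
        return written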

Back in the old days, when users ran programs in their own processes and
operating systems actually bothered to have a model of resource usage
that they enforced, you could at least ensure that the user could only
hurt himself if handed such an object.  These days, OS's tend to ignore
resource issues - memory and time are, for most legitimate purposes,
"too cheap to meter" - and in any case this has long moved outside of
their visibility:  Clients are attaching to multi-thread servers, and
all the OS sees is the aggregate demand.


Most typical Unix/Linux environments do contain resource controls - per-process
limits on resource usage (ulimit/setrlimit).
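
For example (Python on a Unix-ish system; the 512 MB figure is just
illustrative), a process can fence itself in before doing anything risky:

    import resource

    # Cap our own address space; allocations past the cap fail with ENOMEM /
    # MemoryError instead of dragging the rest of the machine down with us.
    limit = 512 * 1024 * 1024
    resource.setrlimit(resource.RLIMIT_AS, (limit, limit))

    # Similar knobs exist for CPU seconds, open descriptors, file size and
    # process count: RLIMIT_CPU, RLIMIT_NOFILE, RLIMIT_FSIZE, RLIMIT_NPROC.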

Allocating huge amounts of memory in almost any multi-threaded app is
likely to cause problems.  Yes, the thread asking for the memory will
die - but unless the code is written very defensively, it stands a
good chance of bring down other threads, or the whole application,
along with it:  Memory is a global resource.

Ah, now this would be due to the standard definition of a thread. If you used
something more akin to lightweight processes, then you could isolate this
resource consumption problem a little better.

A thread is the basic unit of processing; it was never intended to be a unit of
resource consumption.
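
Something along these lines (a rough Python sketch, with made-up numbers) gives
each worker its own memory fence, so a runaway request kills one worker rather
than the whole service:

    import resource
    import multiprocessing

    def cap_worker_memory():
        # Runs once in each worker after fork(): the limit belongs to that
        # process alone, not to its siblings or the parent.
        limit = 256 * 1024 * 1024
        resource.setrlimit(resource.RLIMIT_AS, (limit, limit))

    def handle(request):
        # ... the potentially memory-hungry work goes here ...
        return len(request)

    if __name__ == "__main__":
        with multiprocessing.Pool(processes=4,
                                  initializer=cap_worker_memory) as pool:
            print(pool.map(handle, [b"request-1", b"request-2"]))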


We recently hardened a network protocol against this kind of problem.
You could transfer arbitrary-sized strings over the link.  A string
was sent as a 4-byte length in bytes, followed by the actual data.
A request for 4 GB would fail quickly, breaking the connection.  But
a request for 2 GB might well succeed, starving the rest of the
application.  Worse, the API supports groups of requests - e.g.,
arguments to a function.  Even though the individual requests might
look reasonable, the sum of them could crash the application.  This
makes the hardened code more complex:  You can't just limit the
size of an individual request, you have to limit the total amount
of memory allocated in multiple requests.  Also, because in general
you don't know what the total will be ahead of time, you end up
having to be conservative, so that if a request gets right up close
to the limit, you won't cause the application problems.  (This, of
course, could cause the application *other* problems.)


Yes, and this falls under general application design. Most network protocols are
designed around the concept of front-loading information onto the protocol
stack: every layer puts its information at the front, not at the end.
This means that you can make decisions based on a very small piece of data,
allowing you to process it quickly, or kill it should it cause you problems.

If you're allowing such huge data packets and you haven't got the back-end
system in place to process them quickly, and without resource starvation, then
you're just looking to shoot yourself in the foot.
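
Putting the quoted length-prefix scheme together with the front-loading idea, a
rough Python sketch (the 1 MB / 8 MB caps and the header layout are made up for
illustration): vet a small fixed-size header first, then charge every field
against a per-request budget as it is read.

    import struct

    MAX_FIELD = 1 * 1024 * 1024       # cap on any single string
    MAX_REQUEST = 8 * 1024 * 1024     # budget across all fields of a request
    HEADER = struct.Struct("!4sBI")   # magic, version, declared total length

    def recv_exact(sock, n):
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("peer closed mid-message")
            buf += chunk
        return buf

    def read_request(sock):
        # Decide from the first nine bytes whether to read anything further.
        magic, version, declared = HEADER.unpack(recv_exact(sock, HEADER.size))
        if magic != b"EXMP" or version != 1 or declared > MAX_REQUEST:
            raise ValueError("rejected before buffering the body")

        fields, budget = [], declared
        while budget > 0:
            (length,) = struct.unpack("!I", recv_exact(sock, 4))
            if length > MAX_FIELD or length + 4 > budget:
                raise ValueError("field of %d bytes over budget" % length)
            fields.append(recv_exact(sock, length))
            budget -= length + 4
        return fields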

Every system on the planet has had to deal with these problems, from fork-bombs
through to excess network connections. A lot of them can be prevented using
resource limits. Depending on the OS, you can limit resource usage by either an
individual process or a group of processes (typically referred to as a task
group).
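
The same per-process limits can be pushed down onto anything you spawn; a rough
Python sketch ('some-untrusted-tool' is obviously a placeholder):

    import resource
    import subprocess

    def child_limits():
        # Runs in the child after fork(), before exec(); the limits travel
        # with the child and with anything it forks in turn.
        resource.setrlimit(resource.RLIMIT_CPU, (30, 30))      # CPU seconds
        resource.setrlimit(resource.RLIMIT_NOFILE, (64, 64))   # descriptors

    subprocess.run(["some-untrusted-tool"], preexec_fn=child_limits)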

Should an operating system not provide integrated features to protect you from
this kind of resource consumption, you can quite easily build monitoring into
the application itself to detect and prevent these kinds of things.
Under an OS like Solaris, you could use a facility like DTrace to monitor
resource use at both the application and OS level and make resource-allocation
decisions from that; such a facility would not need to be integrated into the
application.
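
For the application-integrated variant, even something as crude as this
(Python; the numbers are made up, and ru_maxrss is in kilobytes on Linux)
catches a process before it eats the whole box:

    import os
    import resource
    import threading
    import time

    SOFT_CEILING_KB = 512 * 1024    # start reacting at roughly 512 MB

    def watchdog(interval=5.0):
        while True:
            usage = resource.getrusage(resource.RUSAGE_SELF)
            if usage.ru_maxrss > SOFT_CEILING_KB:
                # Real code would stop accepting work, shed load, or alert;
                # exiting is just the bluntest possible reaction.
                os._exit(1)
            time.sleep(interval)

    threading.Thread(target=watchdog, daemon=True).start()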

The problem is that a lot of the resource decisions made for applications depend
more on the administrator than on the application developer. After all, while an
application developer may say '10% of physical memory left is OK', an
administrator might say 'but what about that other service there that needs
15%?'

Is anyone aware of any efforts to control these kinds of vulnerabilities?
It's something that cries out for automation:  Getting it right
by hand is way too hard.  Traditional techniques - strong typing,
unavoidable checking of array bounds and such - may be required for a
more sophisticated approach, but they don't in and of themselves help:
One can exhaust resources with entirely "legal" requests.

In addition, the kinds of resources that you can exhaust this way are
broader than you'd first guess.  Memory is obvious; overrunning a thread
stack is perhaps less so.  (That will *usually* only affect the thread
in question, but not always.)  How about file descriptors?  File space?
Available transmission capacity for a variety of kinds of connections?


-- 
Pete    +353 (87) 412 9576 [M]
Just about every computer on the market today runs Unix, except the Mac
(and nobody cares about it).
                -- Bill Joy 6/21/85


