Bugtraq mailing list archives

Re: local users can panic linux kernel (was: SuSE syslogd advisory)


From: mbeattie () SABLE OX AC UK (Malcolm Beattie)
Date: Mon, 22 Nov 1999 11:14:01 +0000


Mixter writes:

> The impact of the syslogd Denial of Service vulnerability seems to
> be bigger than expected. I found that syslogd could not be stopped
> from responding by one or a few connections, since it uses select()
> calls to manage the connections to /dev/log synchronously. I made an
> attempt with the attached test code, which makes about 2000 connects
> to syslogd from multiple processes, and my system instantly died with
> the message:
>     'Kernel panic: can't push onto full stack'
>
> I've been able to reproduce this as a non-root user, although the
> attack had to be run twice to overcome the stricter user resource
> limits. This was tested with Linux 2.0.38 + syslogd 1.3 (Red Hat 5.2).
>
> As a workaround, I'd strongly advise everyone who hasn't already done
> so to set proper user resource limits, but that is only a stopgap.
>
> Taking a guess, I would say that the panic is caused by instability
> in the Linux select() implementation, and could therefore be abused
> in other programs that manage an unlimited number of connections
> using the select() syscall.

Why take a guess when it's so easy to grep the source?

    % cd /usr/src/linux
    % grep 'onto full stack' `find . -name '*.[ch]'`
    ./net/unix/garbage.c:           panic("can't push onto full stack");

So it's nothing to do with select() and everything to do with Unix
domain socket garbage collection. The comments at the top of that
file say:

     * 12/3/97 -- Flood
     * Internal stack is only allocated one page.  On systems with NR_FILE
     * > 1024, this makes it quite easy for a user-space program to open
     * a large number of AF_UNIX domain sockets, causing the garbage
     * collection routines to run up against the wall (and panic).
     * Changed the MAX_STACK to be associated to the system-wide open file
     * maximum, and use vmalloc() instead of get_free_page() [as more than
     * one page may be necessary].  As noted below, this should ideally be
     * done with a linked list.

In other words, your file-max is too low and users on your system
could exhaust system-wide resources in other ways too. Echo a larger
number into /proc/sys/kernel/file-max, e.g.
    echo 8192 > /proc/sys/kernel/file-max
(For those used to dealing with old BSD-ish kernels, this is similar
to the panic() failure modes you get when maxusers is too low.)

If you can still reproduce the problem, I guess unix_gc() may need to
be fixed to use a more reliable upper bound for its stack size.

--Malcolm

--
Malcolm Beattie <mbeattie () sable ox ac uk>
Unix Systems Programmer
Oxford University Computing Services
