Snort mailing list archives

Re: Run an external program


From: Bennett Todd <bet () rahul net>
Date: Wed, 5 Mar 2003 13:07:54 -0500

2003-03-05T12:14:48 Jack Whitsitt (jofny):
> If you're writing to a FIFO, snort doesn't block while waiting
> for the listening process to read in data.

I believe that is only true so long as the reader more or less keeps
up with the writer; FIFOs have only limited kernel buffering.
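The limit is easy to see experimentally. This sketch (assuming a
POSIX system; the exact capacity, often 64 KiB on Linux, is an
implementation detail) fills a FIFO with a non-blocking writer until
the kernel buffer is full -- the point at which a blocking writer
such as snort would stall:

```python
import os
import tempfile

# Sketch: fill a FIFO's kernel buffer to show it is finite.
fifo = os.path.join(tempfile.mkdtemp(), "alerts.fifo")
os.mkfifo(fifo)

# Open the read end first (a non-blocking write-only open of a FIFO
# with no reader fails with ENXIO), then a non-blocking write end.
rd = os.open(fifo, os.O_RDONLY | os.O_NONBLOCK)
wr = os.open(fifo, os.O_WRONLY | os.O_NONBLOCK)

buffered = 0
try:
    while True:
        buffered += os.write(wr, b"x" * 4096)
except BlockingIOError:
    # A blocking writer would stall here until the reader drained
    # the FIFO -- this is the "limited buffering".
    pass

print(f"FIFO absorbed {buffered} bytes before the writer would block")
os.close(rd)
os.close(wr)
```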

> I haven't used it in a fully-used 100Mbit pipe yet, [...]

The issue isn't the network speed so much as the alert rate. If you
aren't getting torrents of alerts, then the differences between
logging strategies aren't nearly as crucial.

> [...] but it shouldn't take more of a performance hit than writing
> a log to MySQL or alerting elsewhere.

It should be less of a hit than writing to MySQL, but I believe direct
database writes are the slowest snort config. High-performance
snort->database is done using the unified logfile format and
Barnyard, again to decouple the (slow) writing from snort.
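If memory serves, the unified-output half of that arrangement is a
pair of lines in snort.conf roughly like the following (filenames and
the 128 MB rollover limit are illustrative, not required values):

```
output alert_unified: filename snort.alert, limit 128
output log_unified: filename snort.log, limit 128
```

Snort then appends binary records to those files as fast as it can,
and Barnyard reads them at its own pace and does the slow database
inserts.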

> If you have a listening client already loaded in memory, it can
> act directly on the signals without spending time loading and
> reinitializing.

Yes, although the context switch can become significant (especially
on Unixes other than Linux where context switching is more
expensive). But the topic of this thread is executing an external
program when there's an alert, and even if the program executor is
persistent, that will still be slower.
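The gap is easy to measure. This sketch contrasts a fork-and-exec per
alert with a single persistent helper fed over a pipe; "cat" here is
a stand-in for any hypothetical alert handler, since the cost being
measured is process startup, not the handler's work:

```python
import subprocess
import time

alerts = [f"alert {i}\n".encode() for i in range(50)]

# Per-alert fork/exec: pay full process startup for every alert.
t0 = time.perf_counter()
for a in alerts:
    subprocess.run(["cat"], input=a, stdout=subprocess.DEVNULL)
per_exec = time.perf_counter() - t0

# Persistent helper: start once, stream alerts down its stdin.
t0 = time.perf_counter()
helper = subprocess.Popen(["cat"], stdin=subprocess.PIPE,
                          stdout=subprocess.DEVNULL)
for a in alerts:
    helper.stdin.write(a)
helper.stdin.close()
helper.wait()
persistent = time.perf_counter() - t0

print(f"fork/exec per alert: {per_exec:.3f}s, "
      f"persistent helper: {persistent:.3f}s")
```

On any system I've tried, the persistent helper wins by orders of
magnitude, which is the argument for keeping the listener resident.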

> It also is a bit faster than constantly checking logs to see if
> they've changed.

Agreed, blocking on a read is more efficient and faster than polling
the file. However, current logfile tailers (e.g. built on the perl
File::Tail module) do extraordinarily well, adapting the polling
interval to the observed file update frequency, while keeping the
system overhead of the polling as modest as possible.
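The adaptive idea is simple enough to sketch. This is in the spirit
of File::Tail, not its actual algorithm -- the function name, the
halving/doubling factors, and the interval bounds are all
illustrative:

```python
import os
import time

def tail(path, interval=0.05, min_interval=0.01, max_interval=2.0):
    """Adaptive polling tailer: poll faster while the file is busy,
    back off geometrically while it is idle."""
    with open(path, "r") as f:
        f.seek(0, os.SEEK_END)          # start at the current end
        while True:
            line = f.readline()
            if line:
                # Busy: tighten the polling interval.
                interval = max(min_interval, interval / 2)
                yield line
            else:
                # Idle: sleep, then back off the interval.
                time.sleep(interval)
                interval = min(max_interval, interval * 2)
```

A consumer just iterates: `for line in tail("/var/log/snort/alert"):`
(path illustrative), and the loop's overhead shrinks automatically
whenever the alert stream goes quiet.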

It all comes down to this: what do you want your snort to do when
there's a deluge of alerts --- imagine, say, hundreds or thousands
of alerts per second in a fast burst. Do you want snort to stall and
stop looking at traffic, operating approximately synchronously with
your fork-n-exec external-program-invoking alert processor, with
only the limited buffering available from a pipe or fifo or
whatever? Or do you want snort to continue doing its best to keep up
with the data, and let the log alert processor keep up or fall
behind as it may, doing its best, with a logfile's huge buffering
(limited only by available disk space) decoupling snort from the
alert processing? Either answer may be preferable for some specific
application.

A loosely similar concept applies to modern syslog forwarding
designs. Using traditional (UDP transported) syslog, you may lose
messages during an overload, but the writer will never stall. Newer
TCP-based syslog transports offer far better behavior in avoiding
message loss, at the expense of putting the writer at the mercy of
the reader.
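The asymmetry shows up right at the socket layer. In this sketch
(port 5514 and the message format are illustrative stand-ins, not a
full RFC 3164 sender), a UDP datagram send completes whether or not
anyone is listening, while a TCP transport must first get a
connection up -- and once it has one, a wedged reader can eventually
block the writer:

```python
import socket

msg = b"<134>snort: test alert"

# UDP: the send succeeds unconditionally; an absent or overloaded
# collector just means the message is silently dropped.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sent = udp.sendto(msg, ("127.0.0.1", 5514))
udp.close()

# TCP: no listener means no connection, and the writer finds out.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.settimeout(0.5)
try:
    tcp.connect(("127.0.0.1", 5514))
    connected = True
except OSError:
    connected = False
tcp.close()

print(f"UDP sent {sent} bytes unconditionally; "
      f"TCP connected: {connected}")
```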

-Bennett
