Snort mailing list archives

Re: Snort and high-traffic lines


From: Jens Krabbenhoeft <tschenz-snort-users () noris net>
Date: Wed, 2 Oct 2002 20:32:07 +0200

Hi Gary, hi list,

> I don't think multiple instances of snort on the same box
> are going to help you much. If you're using a promiscuous

I tested running snort on the dual-PIII box after I wrote the mail. I
split the ruleset into two parts of about 700 rules each. It seemed to
work quite well, although I haven't had time yet to test it against the
real-life traffic (the 70 MBps I keep talking about *g*). The test was
done by snorting on lo and issuing a "ping -f -s 10000 localhost",
which results in about 550 Mbit/s on that box (and wastes 99% of one
CPU on the ping -f).
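The split itself is simple; a minimal sketch of one way to do it (assuming a plain-text rules file with one rule per line - not the exact method I used):

```python
def split_ruleset(lines):
    """Distribute rule lines alternately over two halves; comments and
    blank lines are copied into both so each file stays readable."""
    half_a, half_b = [], []
    toggle = True
    for line in lines:
        stripped = line.strip()
        if not stripped or stripped.startswith("#"):
            half_a.append(line)   # keep comments/blanks in both halves
            half_b.append(line)
        elif toggle:
            half_a.append(line)
            toggle = False
        else:
            half_b.append(line)
            toggle = True
    return half_a, half_b

# Usage (file names are just examples):
#   a, b = split_ruleset(open("all.rules").readlines())
#   open("half-a.rules", "w").writelines(a)
#   open("half-b.rules", "w").writelines(b)
```

Alternating line by line rather than cutting the file in the middle keeps both halves roughly balanced even if the rule files are grouped by category.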

The snort instance with the rule matching "Traffic from 127.0.0.1 ->
127.0.0.1" had drops - but I think that's because it was alerting on
every packet (and thus generated about 2.5 GB of unified logfiles in a
short period *g*). The other snort instance didn't drop a single
packet. I will give that setup a try on Monday - and will let you know.

The problems arising from this are duplicate alerts from the
preprocessors and from the rules:

As both snorts see the same traffic, the preprocessors will alert on
the same packet twice - but this seems to be a solvable problem. The
ruleset split can also result in two alerts (instead of one) for the
same packet, as a packet now has two chances to be the first in the
"first match, first go" game.
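One way to solve the duplicate-alert problem would be to merge the alert streams of both instances and drop the second copy of any alert that matches on signature, endpoints, and timestamp. A hedged sketch (the alert-dict format here is an assumption for illustration, not snort's unified format):

```python
def merge_alerts(stream_a, stream_b):
    """Merge two alert lists, keeping only the first occurrence of each
    (sig, src, dst, ts) combination."""
    seen = set()
    merged = []
    for alert in stream_a + stream_b:
        key = (alert["sig"], alert["src"], alert["dst"], alert["ts"])
        if key not in seen:
            seen.add(key)
            merged.append(alert)
    return merged
```

In practice this would run as a post-processing step between the sensors and the alert database, so the frontend only ever sees one copy.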

> switch port like Cisco's SPAN, you could probably attach a
> 100Mb *hub* to the port and attach multiple commodity machines
> each running snort with a subset of the rules. I've been

Sure I could. But for me it would be the better way to have a small
number of sensors each snorting more segments, rather than more
sensors sharing one segment. So I will try to get that "one sensor,
heavy traffic" thing to work.

> the false detections are going to be impractical. For such a
> network I think it is necessary to get rid of rules that are
> ineffective. If you don't do it in the collection process, you'll
> end up doing it in the examination process anyway. The one

Sure. But when you end up doing it in the examination process (and
have a good frontend to the alert database) you have a more general
setup - and that's what I tried to build, because the network(s) I try
to snort are very heterogeneous. So you can't just say "no IIS -> no
IIS rules", even though that would be the most effective weeding of
rules.
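For a homogeneous network, that kind of weeding is just a matter of commenting out include lines in snort.conf - assuming the default rule-file layout shipped with snort, e.g.:

```
# No IIS on this network -> no IIS rules:
# include $RULE_PATH/web-iis.rules
```

On a heterogeneous network you can't drop whole categories like that, which is exactly the problem.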

I will surely not end up with the full ~1400 rules - I think it will
be about 500 or so. But as I'm gathering performance information about
snort at the moment, I am trying to find a way to get high amounts of
traffic processed with the highest number of rules.

> outgoing which will tell you when one of your servers is infected
> indicating a successful intrusion.

Sure. Many of the rules that come with snort will be rewritten to
detect successful compromises.

> I'll grant you, weeding out the ineffective rules is a tedious
> process and not without some risk...but what isn't? :)

I will weed out the ruleset - I promise ;). But for now I'm trying to
get snort to snort as much traffic as possible with that kind of
ruleset. Because knowing from experience how many rules you can snort,
at what bandwidth, on given hardware can *IMHO* help a lot in deciding
what IDS to deploy for what kind of setup.

> The portscan or stream preprocessors seem to take significant
> CPU when large numbers of hosts are involved. I haven't figured
> out how to deal with that yet.

I already disabled the portscan preprocessor. This helps a lot, and as
I'm not a fan of the "portscans are evil" theory, I don't have a big
problem with that :)
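For reference, disabling it is just a matter of commenting out the preprocessor line in snort.conf - the arguments shown here are the ones from the sample 1.x config and may differ in your setup:

```
# preprocessor portscan: $HOME_NET 4 3 portscan.log
```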

BTW: For all of you wondering why I'm so into statistics about snort
performance - I'm writing my diploma thesis on this topic at the
moment and am trying to evaluate what bandwidth/ruleset/hardware
combinations are workable. After that evaluation period I will try to
cut down the rule count, etc., so that when finally deploying the IDS
I hopefully won't have performance problems anymore.

Thanks for your feedback, Gary, thanks for all the other feedback from
the mailing list - and thanks in advance for upcoming feedback ;).

Kind regards,

        Jens


_______________________________________________
Snort-users mailing list
Snort-users () lists sourceforge net
Go to this URL to change user options or unsubscribe:
https://lists.sourceforge.net/lists/listinfo/snort-users
Snort-users list archive:
http://www.geocrawler.com/redir-sf.php3?list=snort-users