Snort mailing list archives

RE: snort behavior in very high-load environment, BSD vs. linux


From: "Abe L. Getchell" <abegetchell () qx net>
Date: Wed, 31 Jul 2002 12:02:43 -0400

Hi Adam,
        I personally _would not_ try to tackle that kind of volume with
a single sensor.  Try running the traffic through a TopLayer switch
(http://www.toplayer.com/ - no I don't work for 'em, just have had great
luck with their products) and load balancing it across an array of
sensors.  This will not only give you the ability to inspect the volume
of traffic you're handling without the scalability issues you're
encountering, but will also give you the redundancy you probably need in
your environment.  If you take this approach, it won't matter which OS
will perform better at high loads because each sensor will easily be
able to handle the amount of traffic you're throwing at it.  You can
then focus your OS decision on more important factors like which OS you
can more efficiently manage and support within your organization.  You
also won't have to obsess over which chipset the system board in your
sensor is using or the amount of cache being utilized on your RAID
controller.  Trust me, I've been there, and it's not fun. =)  You need
more raw processing power or system bus throughput?  Plug another sensor
into the switch and put it in the CC group.  Simple and effective.
        Since cost is a concern, check out TopLayer's new IDS Load
Balancer.  It's a functionally scaled-down version of their
full-featured AppSwitch that just does the IDS load-balancing piece.
It's considerably less expensive than its big brother, and for what
you're trying to accomplish, it will be functionally identical.  Nortel's Alteon
Web Switch also does a good job at load balancing traffic across an
array of IDS's, but doesn't have some of the cool features that the
TopLayer does, and is considerably more expensive.
        Hope this helps...

Thanks,
Abe

--
Abe L. Getchell
Security Engineer
abegetchell () qx net

-----Original Message-----
From: snort-users-admin () lists sourceforge net 
[mailto:snort-users-admin () lists sourceforge net] On Behalf Of 
Adam D'Amico
Sent: Tuesday, July 30, 2002 6:43 PM
To: snort-users () lists sourceforge net
Subject: [Snort-users] snort behavior in very high-load 
environment, BSD vs. linux


Hello,

I've been working with snort for a while now in an 
environment that seems to be on the bleeding edge of what 
should be snortable.  I've gotten predictable results in some 
spots and weirdness in others. I thought I would share my 
results with everyone here, in the hope that someone might 
get use out of them, and maybe even have decent explanations 
for the weirdness.  I've read through a lot of the previous 
threads having to do with packet loss and system tuning, but 
not much of it was applicable, given the network environment 
I'm running in.

I've got two identical boxes with two identical gig-e feeds 
running into them.  The hardware is dual P3 1.26GHz, 1GB RAM, 
plenty of 7200rpm IDE disk, and Intel pro1000-T adapters.  On 
the copper there is a steady stream from our backbone of 
around 55-70kpps, weighing in at between 300-400Mbps, 
depending on time of day.  The traffic is not filtered or 
firewalled in any way, so we see everything you could 
possibly imagine, including tens of thousands of attacks and 
exploit attempts daily.

Here's where it gets interesting... I'm running FreeBSD 4.6 
on one of the boxes, and RedHat 7.3 on the other.  I've 
optimized the 2.4 linux kernel to some degree for packet 
sniffing, but didn't really know how to do that for the BSD kernel.

In trying to decide which OS to keep as the production 
environment for the IDS, I've been racing the systems against 
one another with identical traffic streams and identical 
snort 1.8.7 configurations.

Now, I've read/heard for a long time that the BSD packet 
capture ability is far better than that in linux, even with 
the new 2.4 optimizations.  I was hoping to confirm or deny 
that with some of the tests I ran, but I wasn't able to 
interpret the results quite that way, mostly because of my 
own ignorance on the finer mechanics of kernel behavior.

So on to the numbers, starting with basic stuff.  Running 
tcpdump -qni eth2 > /dev/null on both boxes for the same 
10-second period gives output like this:

linux:    665723 packets received by filter
               0 packets dropped by kernel
FreeBSD:  673301 packets received by filter
           31367 packets dropped by kernel

OK, so this seems to support the assertion that BSD is better 
at sniffing, but there also seems to be some skew between 
what each OS calls a dropped packet.
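A rough way to put those counters on equal footing is to compute the
implied drop rate directly. This is only a sketch: it assumes
dropped/received is a fair loss estimate, which depends on exactly how
each OS defines the two counters (the skew in question).

```shell
#!/bin/sh
# Approximate kernel drop rate from tcpdump's exit summary counters.
# The semantics of "received by filter" vs "dropped by kernel" differ
# between OSes, so treat the result as a rough comparison figure only.
rate() {
    # $1 = packets dropped by kernel, $2 = packets received by filter
    awk "BEGIN { printf \"%.2f\", 100 * $1 / $2 }"
}

echo "FreeBSD: $(rate 31367 673301)% dropped relative to received"
echo "linux:   $(rate 0 665723)% dropped relative to received"
```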

Running snort overnight with a command line like
snort -c snort.conf -i eth2 -b -A fast
gave output like this:

linux:
===========================================================================
Snort analyzed 1793770624 out of 1820533437 packets, The kernel dropped
0(0.000%) packets

Breakdown by protocol:                Action Stats:
    TCP: 1737510100 (95.440%)         ALERTS: 25663
    UDP: 46480933   (2.553%)          LOGGED: 25663
   ICMP: 5117982    (0.281%)          PASSED: 0
    ARP: 6745       (0.000%)
   IPv6: 0          (0.000%)
    IPX: 0          (0.000%)
  OTHER: 4654819    (0.256%)
DISCARD: 1          (0.000%)
===========================================================================
FreeBSD:
===========================================================================
Snort analyzed 1546382464 out of 1825950223 packets, The kernel dropped
258626839(14.164%) packets

Breakdown by protocol:                Action Stats:
    TCP: 1495941340 (81.927%)         ALERTS: 23199
    UDP: 41551209   (2.276%)          LOGGED: 23199
   ICMP: 4684537    (0.257%)          PASSED: 0
    ARP: 6226       (0.000%)
   IPv6: 0          (0.000%)
    IPX: 0          (0.000%)
  OTHER: 4199210    (0.230%)
DISCARD: 1          (0.000%)
===========================================================================

Again, skew between definitions of packet drop?  But more 
importantly, even though BSD has appeared to capture packets 
better, should I trust linux for snort since it picked up 
more alerts?  And why did linux claim such a higher 
percentage of TCP packets?

Thinking I could speed up snort's performance by not logging 
any packets, I tried snort -c snort.conf -i eth2 -N -A fast and got:

linux:
===========================================================================
Snort analyzed -906323712 out of -843192947 packets, The kernel dropped
0(0.000%) packets

Breakdown by protocol:                Action Stats:
    TCP: -1027885735 (94.649%)         ALERTS: 304136
    UDP: 99500448   (2.883%)          LOGGED: 304136
   ICMP: 8956378    (0.259%)          PASSED: 0
    ARP: 10166      (0.000%)
   IPv6: 0          (0.000%)
    IPX: 0          (0.000%)
  OTHER: 13095090   (0.379%)
DISCARD: 0          (0.000%)
===========================================================================
FreeBSD:
===========================================================================
Snort analyzed -1634877952 out of -796656885 packets, The kernel dropped
793215694(22.674%) packets

Breakdown by protocol:                Action Stats:
    TCP: -1735488330 (73.163%)         ALERTS: 275094
    UDP: 82257816   (2.351%)          LOGGED: 275094
   ICMP: 7711814    (0.220%)          PASSED: 0
    ARP: 8948       (0.000%)
   IPv6: 0          (0.000%)
    IPX: 0          (0.000%)
  OTHER: 10631823   (0.304%)
DISCARD: 0          (0.000%)
===========================================================================

OK, not only did that NOT help my loss rate, but what's up 
with these negative numbers?  Worse, I sometimes got output 
on FreeBSD like:

Snort analyzed -249022464 out of -1275082956 packets, The kernel dropped
1452753881(113.934%) packets

How do I drop more than 100%?  I began to wonder if I was 
overrunning a variable somewhere in the code.  The negatives 
only happened when using the -N flag though.
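Those figures are consistent with a signed 32-bit counter wrapping
around: any count past 2^31-1 prints as negative under two's
complement. Assuming a plain single wrap (a sanity check on the
numbers, not a claim about Snort's internals), the real counts can be
recovered by adding 2^32:

```shell
#!/bin/sh
# Undo a two's-complement wrap of a signed 32-bit counter by adding
# 2^32. Only valid if the counter wrapped exactly once.
unwrap() {
    if [ "$1" -lt 0 ]; then
        echo $(( $1 + 4294967296 ))
    else
        echo "$1"
    fi
}

# The "analyzed out of total" figures from the linux -N run above:
unwrap -906323712     # -> 3388643584 packets analyzed
unwrap -843192947     # -> 3451774349 packets total
```

Under that assumption the "113.934%" figure also makes sense: the drop
percentage was computed against a wrapped, and therefore far too
small, total rather than the true one.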

So that's one set of issues, but I was also concerned about 
the hardware side of the packet loss.  Even with no 
preprocessors, no packet logging, fast alerts, and ONE rule, 
I still had a double-digit drop rate.  At no point, on either 
OS, was a CPU pegged while snort was running.  Lately I've 
been thinking that perhaps the bottleneck is the system bus.  
My gig interfaces are plugged into a 32-bit PCI bus, not 64, 
and the mobo is using PC-133 SDRAM on a 133MHz FSB.  I 
imagine that if I had a system with the newest Intel 850e
chipset, a 2.53GHz P4 and 1066MHz RDRAM with an FSB of 
533MHz, the drop rate would shrink considerably.  But I can't 
be sure, and don't want to make the investment without 
testing.  I've also looked into load-balancing switches, but 
the price tag on those is prohibitive.
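The bus suspicion is easy to sanity-check with back-of-the-envelope
numbers. This sketch assumes a standard 32-bit/33 MHz PCI bus and
ignores arbitration and transaction overhead, which in practice eat a
large fraction of the theoretical peak:

```shell
#!/bin/sh
# Theoretical peak of a 32-bit, 33 MHz PCI bus, in Mbit/s.
PCI_PEAK=$(( 32 * 33 ))   # 1056 Mbit/s, roughly one saturated gig-e feed
WIRE=400                  # upper end of the observed 300-400 Mbps load

# Each captured packet crosses the PCI bus once (DMA into RAM) and is
# then copied through main memory again on its way up to snort, so the
# real headroom is much smaller than PCI_PEAK - WIRE suggests.
echo "PCI peak:  ${PCI_PEAK} Mbit/s"
echo "Wire load: ${WIRE} Mbit/s"
```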

So that's my story.  If anyone on this list has ideas about 
how I could tune for better performance in the kernels, I'd 
love to hear it.
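(For reference, the knobs usually suggested for this are the socket
receive buffers on Linux 2.4 and the BPF buffer sizes on FreeBSD. The
exact sysctl names and sensible values vary by kernel release, so
treat the following as an illustrative sketch to verify against
`sysctl -a` on your own boxes, not as tested settings.)

```shell
# Linux 2.4: raise the default and maximum socket receive buffers (bytes).
sysctl -w net.core.rmem_default=1048576
sysctl -w net.core.rmem_max=8388608

# FreeBSD 4.x: raise the BPF capture buffer sizes (names vary by
# release; check `sysctl -a | grep bpf` first).
sysctl -w debug.bpf_bufsize=524288
sysctl -w debug.bpf_maxbufsize=8388608
```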

Cheers,
Adam D'Amico



-------------------------------------------------------
This sf.net email is sponsored by: Dice - The leading online 
job board for high-tech professionals. Search and apply for 
tech jobs today! http://seeker.dice.com/seeker.epl?rel_code=31
_______________________________________________
Snort-users mailing list
Snort-users () lists sourceforge net
Go to this URL to change user options or unsubscribe: 
https://lists.sourceforge.net/lists/listinfo/snort-users

Snort-users list archive: 
http://www.geocrawler.com/redir-sf.php3?list=snort-users





