Snort mailing list archives

snort behavior in very high-load environment, BSD vs. linux


From: "Adam D'Amico" <adamico () speakeasy net>
Date: Tue, 30 Jul 2002 18:43:21 -0400

Hello,

I've been working with snort for a while now in an environment that
seems to be on the bleeding edge of what should be snortable.  I've
gotten predictable results in some spots and weirdness in others.
I thought I would share my results with everyone here, in the hope
that someone might get some use out of them, and maybe even have decent
explanations for the weirdness.  I've read through a lot of the
previous threads having to do with packet loss and system tuning,
but not much of it was applicable, given the network environment
I'm running in.

I've got two identical boxes with two identical gig-e feeds running
into them.  The hardware is dual P3 1.26GHz, 1GB RAM, plenty of
7200rpm IDE disk, and Intel pro1000-T adapters.  On the copper there
is a steady stream from our backbone of around 55-70kpps, weighing
in at 300-400Mbps, depending on the time of day.  The traffic is
not filtered or firewalled in any way, so we see everything you
could possibly imagine, including tens of thousands of attacks and
exploit attempts daily.

Here's where it gets interesting... I'm running FreeBSD 4.6 on one
of the boxes, and RedHat 7.3 on the other.  I've optimized the 2.4
linux kernel to some degree for packet sniffing, but didn't really
know how to do that for the BSD kernel.
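My guess is that the tuning comes down to enlarging the capture buffers on
both sides, something like the following, though I haven't verified the
FreeBSD 4.x sysctl names and the values here are only placeholders:

# linux 2.4: raise the socket receive buffer limits the capture socket can use
echo 8388608 > /proc/sys/net/core/rmem_max
echo 8388608 > /proc/sys/net/core/rmem_default

# FreeBSD 4.x: raise the bpf buffer sizes
# (names are a guess on my part; check sysctl -a | grep bpf)
sysctl -w debug.bpf_bufsize=524288
sysctl -w debug.bpf_maxbufsize=524288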

In trying to decide which OS to keep as the production environment
for the IDS, I've been racing the systems against one another with
identical traffic streams and identical snort 1.8.7 configurations.

Now, I've read/heard for a long time that the BSD packet capture
ability is far better than that in linux, even with the new 2.4
optimizations.  I was hoping to confirm or deny that with some of
the tests I ran, but I wasn't able to interpret the results quite
that way, mostly because of my own ignorance on the finer mechanics
of kernel behavior.

So on to the numbers, starting with basic stuff.  Running
tcpdump -qni eth2 > /dev/null
on both boxes for the same 10-second period gives output like this:

linux:    665723 packets received by filter
          0 packets dropped by kernel
FreeBSD:  673301 packets received by filter
          31367 packets dropped by kernel

OK, so this seems to support the assertion that BSD is better at
sniffing, but there also seems to be some skew between what each
OS calls a dropped packet.
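For what it's worth, a rough way to keep the 10-second windows comparable,
and to get a second opinion from the kernel's own interface counters, is
something along these lines (a sketch only; interface names and netstat
columns differ between the two OSes):

# run tcpdump for a fixed window; SIGINT makes it print the received/dropped summary
tcpdump -qni eth2 > /dev/null &
TDPID=$!
sleep 10
kill -INT $TDPID

# then compare against the raw interface counters
# (RX packets/errors on linux, Ipkts/Ierrs on FreeBSD)
netstat -i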

Running snort overnight with a command line like
snort -c snort.conf -i eth2 -b -A fast
gave output like this:

linux:
===============================================================================
Snort analyzed 1793770624 out of 1820533437 packets, The kernel dropped
0(0.000%) packets

Breakdown by protocol:                Action Stats:
    TCP: 1737510100 (95.440%)         ALERTS: 25663
    UDP: 46480933   (2.553%)          LOGGED: 25663
   ICMP: 5117982    (0.281%)          PASSED: 0
    ARP: 6745       (0.000%)
   IPv6: 0          (0.000%)
    IPX: 0          (0.000%)
  OTHER: 4654819    (0.256%)
DISCARD: 1          (0.000%)
===============================================================================
FreeBSD:
===============================================================================
Snort analyzed 1546382464 out of 1825950223 packets, The kernel dropped
258626839(14.164%) packets

Breakdown by protocol:                Action Stats:
    TCP: 1495941340 (81.927%)         ALERTS: 23199
    UDP: 41551209   (2.276%)          LOGGED: 23199
   ICMP: 4684537    (0.257%)          PASSED: 0
    ARP: 6226       (0.000%)
   IPv6: 0          (0.000%)
    IPX: 0          (0.000%)
  OTHER: 4199210    (0.230%)
DISCARD: 1          (0.000%)
===============================================================================

Again, skew between the definitions of a dropped packet?  But more
importantly, even though BSD appeared to capture packets better in the raw
tcpdump test, should I trust linux for snort since it picked up more
alerts?  And why did linux claim such a higher percentage of TCP packets?
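One guess on the TCP gap: if the per-protocol percentages are computed
against the grand total (drops included) rather than against the packets
actually analyzed, the difference mostly disappears.  Quick arithmetic on
the figures above:

# TCP as a share of packets analyzed, not of the grand total
awk 'BEGIN { printf "linux %.1f%%  freebsd %.1f%%\n",
             100*1737510100/1793770624, 100*1495941340/1546382464 }'
# prints roughly: linux 96.9%  freebsd 96.7%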

Thinking I could speed up snort's performance by not logging any
packets, I tried
snort -c snort.conf -i eth2 -N -A fast
and got:

linux:
===============================================================================
Snort analyzed -906323712 out of -843192947 packets, The kernel dropped
0(0.000%) packets

Breakdown by protocol:                Action Stats:
    TCP: -1027885735 (94.649%)         ALERTS: 304136
    UDP: 99500448   (2.883%)          LOGGED: 304136
   ICMP: 8956378    (0.259%)          PASSED: 0
    ARP: 10166      (0.000%)
   IPv6: 0          (0.000%)
    IPX: 0          (0.000%)
  OTHER: 13095090   (0.379%)
DISCARD: 0          (0.000%)
===============================================================================
FreeBSD:
===============================================================================
Snort analyzed -1634877952 out of -796656885 packets, The kernel dropped
793215694(22.674%) packets

Breakdown by protocol:                Action Stats:
    TCP: -1735488330 (73.163%)         ALERTS: 275094
    UDP: 82257816   (2.351%)          LOGGED: 275094
   ICMP: 7711814    (0.220%)          PASSED: 0
    ARP: 8948       (0.000%)
   IPv6: 0          (0.000%)
    IPX: 0          (0.000%)
  OTHER: 10631823   (0.304%)
DISCARD: 0          (0.000%)
===============================================================================

OK, not only did that NOT help my loss rate, but what's up with these
negative numbers?  Worse, I sometimes got output on FreeBSD like:

Snort analyzed -249022464 out of -1275082956 packets, The kernel dropped
1452753881(113.934%) packets

How do I drop more than 100%?  I began to wonder if I was overrunning
a variable somewhere in the code.  The negatives only happened when using
the -N flag though.
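My best guess is a signed 32-bit counter: past 2,147,483,647 packets the
count wraps negative, which at 55-70kpps happens in well under a day.  That
would also explain a drop percentage over 100%, since a still-positive drop
count gets divided by a wrapped total.  If the guess is right, the real
total can be recovered by adding 2^32:

# reinterpret the negative total as an unsigned 32-bit value
awk 'BEGIN { printf "%.0f packets\n", -906323712 + 4294967296 }'
# about 3.39 billion, which is plausible for an overnight run at this rate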

So that's one set of issues, but I was also concerned about the hardware
side of the packet loss.  Even with no preprocessors, no packet logging,
fast alerts, and ONE rule, I still had a double-digit drop rate.  At no
point, on either OS, was a CPU pegged while snort was running.  Lately
I've been thinking that perhaps the bottleneck is the system bus.  My
gig interfaces are plugged into a 32-bit PCI bus, not 64, and the mobo
is using PC-133 SDRAM on a 133MHz FSB.  I imagine that if I had a system
with the newest Intel 850e chipset, a 2.53GHz P4 and 1066MHz RDRAM with
an FSB of 533MHz, the drop rate would shrink considerably.  But I can't
be sure, and don't want to make the investment without testing.  I've
also looked into load-balancing switches, but the price tag on those is
prohibitive.
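A back-of-envelope check on the bus theory, assuming the slot runs at the
usual 33MHz (theoretical peak only, ignoring arbitration and the per-packet
interrupt load):

# 32-bit PCI at 33MHz: 4 bytes per clock, expressed in Mbps
awk 'BEGIN { printf "~%d Mbps theoretical ceiling vs. 300-400 Mbps of traffic\n",
             4 * 33000000 * 8 / 1000000 }'
# ~1056 Mbps, so the raw bandwidth has headroom on paper, but real shared-PCI
# throughput is far lower, and the interrupt rate is a separate problem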

So that's my story.  If anyone on this list has ideas about how I could
tune the kernels for better performance, I'd love to hear them.

Cheers,
Adam D'Amico


