
Re: [Snort-sigs] Matching the beginning or end of a (preprocessor) content buffer


From: Mike Cox <mike.cox52 () gmail com>
Date: Fri, 9 Nov 2012 11:23:45 -0600

On Fri, Nov 9, 2012 at 9:13 AM, Joel Esler <jesler () sourcefire com> wrote:
On Nov 9, 2012, at 9:55 AM, Mike Cox <mike.cox52 () gmail com> wrote:

So I can probably do some tests when I get the time (thanks for the
responses BTW), but I'm somewhat concerned with the comment, "...it
would be against static pcaps, which doesn't test performance.  (Some
people think that looping a pcap through a system a bunch of times
tests performance..)"

Can you elaborate on this?


We've heard of people testing performance by taking a big pcap, looping
it through their engine many times, and thinking that's a "real world"
performance test.  (Which, in reality, is a test of how fast your hard
drive can be read ;)

I understand that using the '-r' option to tell Snort to read a pcap
will not test performance of things like bandwidth, dropped packets,
etc.  However, in a case like this when you want to test *relative*
performance between rules, is Performance Profiling not accurate for
things like avg_ticks, total_ticks, etc.?  Does the engine not load the
rules, build the matching data structures/logic, and process things the
same way when the '-r' option is used?  Let me say again that I am
asking about relative performance numbers between rules, not absolute
numbers necessarily.
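
For example (a sketch -- if I remember right this needs a Snort build
with --enable-perfprofiling and rule profiling enabled in snort.conf),
the numbers I mean come from something like:

config profile_rules: print all, sort avg_ticks

snort -c snort.conf -r test.pcap -q

and then comparing the avg_ticks column between rules from the same run.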


Yeah…. ehh….

So..  Here's the deal.  If you are testing a rule against a pcap that you
know is going to fire, you are going to get a performance number.  That
performance number is relative to that pcap (no matter how big the pcap
is).  You can do some tweaking to a rule to get better performance against
that pcap, but there is no accounting for how the rule will actually
behave in the real world.

I'll give you a completely awful example, but I am hoping you will look
past it and not debate me on its merits ;)  (Not you Mike, but someone
else on the list might feel like being pedantic or argumentative and do
so)

content:"User-Agent|3a 20|"; content:"badstuff";

You run this against any static pcap, and you will get some number, "x".
Then you can change the rule to read:

content:"User-Agent|3a 20|": content:"badstuff|0d 0a|";

You'll get a better performance number, "y", which is better than "x",
and think "well, I improved the performance of the rule."  And you did.
Against that pcap.  However, in the real world, your fast pattern match
is "User-Agent|3a 20|", which will match on almost every HTTP session
there is.

Performance can be measured in many ways, and the Snort Performance
Profiling takes into account many of these.
Sure, in these examples the fast-pattern matcher will default to the
longest string, which is "User-Agent: ".
So when you are looking at specific rule performance, I don't see how
a rule that has to match on two additional bytes can be more efficient
than one that doesn't (if all other things are equal).
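
(As an aside, and strictly a sketch of what I mean, not a fix for the
example: if the concern is the fast pattern landing on the User-Agent
header, the fast_pattern modifier can force the matcher onto the more
selective string:

content:"User-Agent|3a 20|"; content:"badstuff"; fast_pattern;

so the multi-pattern search keys on "badstuff" and the User-Agent
content only gets checked after a "badstuff" hit.)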


We test against pcaps all day.  Constantly.  Just about every rule we have
in the VRT ruleset has a pcap and exploit associated with it.  But it's no
match for the real thing.

Pcaps *are* the real thing.  Again, I'm only talking about relative
rule performance, not data speeds, etc.

TL;DR -- You can test all you want against pcaps; at the end of the day,
it's meaningless.  Real-world traffic mix is where it's at.  You want big
packets, small packets, complex packets, simple packets, etc.

This is confusing.  Network data is network data, no matter how it is
generated ... yet it still sounds like using the '-r' option to tell
Snort to read a pcap file is different from telling Snort to process
data over an interface ('-i').  Is this right?  Pcap files can contain
"big packets, small packets, complex packets, simple packets, etc.," so
I'm confused about the disconnect here.
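
Put differently, and assuming a stock snort.conf in both cases, I don't
see why these two invocations should rank the same rules differently:

snort -c /etc/snort/snort.conf -r traffic.pcap

snort -c /etc/snort/snort.conf -i eth0

The decode/preprocess/detect path should be identical; only the packet
source differs.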

To be clear, I'm not talking about relative performance metrics across
multiple pcaps in Snort using the '-r' option.  I'm talking about
metrics generated from a single pcap (or a network feed from an
interface) evaluated by an engine configured with multiple rules, where
those rules are the basis for the relative performance comparison.

Thanks.

-Mike Cox
