Snort mailing list archives

Re: Snort with pf_ring -- recommendations for DAQ settings


From: Eugenio Perez <eugenio () redborder org>
Date: Wed, 24 Sep 2014 12:48:42 +0200

Hi Risto.



2014-09-18 13:55 GMT+02:00 Risto Vaarandi <Risto.Vaarandi () seb ee>:
Hi all,
I've been testing pf_ring DAQ module for Snort for a while, and using them together allows for creating flexible 
setups for high speed networks. However, while researching the web and mailing lists for optimal DAQ settings, I've 
found several recommendations which are somewhat confusing. Also, it is hard to find any recommendations for some DAQ 
parameters.
Firstly, I have found several postings which recommend the binding of Snort processes to CPUs with '--daq-var 
bindcpu=N' options, while other people seem to disagree with this: http://seclists.org/snort/2013/q1/208. Can anyone 
provide additional insights into this issue? (I am using sensors that have Intel 10Gbit/s cards with 16 queues.)

I would recommend binding each Snort instance to a CPU. Imagine, for
example, that CPU0 receives a PF_RING packet and that the packet is
later processed by that same CPU. There is then a good chance the packet
is still in CPU0's cache, so it does not need to be fetched from RAM
again. If the packet has to travel to another CPU, it is much more
likely to have to be read from RAM again.

However, many other things affect this; the Linux scheduler may already
be smart enough to avoid the problem on its own. So I recommend running
tests and measuring it directly, using the perfmon preprocessor, top,
and any profiling tool you know.
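
For reference, this is roughly what CPU binding asks the kernel for at
the OS level. A minimal sketch only (the standalone main() and the
command-line argument are for illustration; this is not the pf_ring DAQ
source):

    /* Sketch: pin the calling process to a single CPU, roughly what a
     * bindcpu-style option ends up doing. Illustration only. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int cpu = (argc > 1) ? atoi(argv[1]) : 0;  /* CPU id to bind to */
        cpu_set_t set;

        CPU_ZERO(&set);
        CPU_SET(cpu, &set);

        /* pid 0 = the calling process */
        if (sched_setaffinity(0, sizeof(set), &set) != 0) {
            perror("sched_setaffinity");
            return 1;
        }

        printf("pinned to CPU %d\n", cpu);
        /* ...packet processing would run here, staying on that CPU and
         * keeping its data warm in that CPU's cache... */
        return 0;
    }

The idea is to run one instance per RX queue, each pinned to a different
CPU, so that a packet is read and inspected on the same core.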

Also, while browsing the lists I have often seen examples with --daq-var watermark=64 --daq-var timeout=1 settings. 
On the other hand, pf_ring DAQ module uses watermark=128 as the default, while according to strace the default 
timeout is 1000 (1 second). Are there any reasons for using watermark=64 and timeout=1 over the pf_ring defaults? So 
far, I haven't found any postings why these particular settings are used in a number of examples.
Kind regards,
risto

When the DAQ fetches a packet, it performs the following actions:
- It does a non-blocking read() on the pf_ring socket. That is, if a
packet is already available, it is returned and Snort processes it.
- If no packet was read, the process waits (poll) until the socket has a
packet ready to read. The poll waits for:
   * a maximum time (timeout)
   * a minimum number of packets to be ready (watermark)

So the timeout is how long the poll will "block" the process while it
waits for "watermark" packets. A larger timeout or watermark means fewer
calls to poll, but more delay before new packets are processed.
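
A minimal sketch of that read/poll sequence (the function name
get_packet and the ring_fd/timeout_ms parameters are mine, only to show
the shape of the loop; this is not the real pf_ring DAQ code):

    #include <errno.h>
    #include <poll.h>
    #include <unistd.h>

    /* Returns bytes read, 0 on poll timeout, -1 on error. */
    ssize_t get_packet(int ring_fd, void *buf, size_t len, int timeout_ms)
    {
        /* 1. non-blocking read: if a packet is already queued, take it */
        ssize_t n = read(ring_fd, buf, len);
        if (n > 0 || (n < 0 && errno != EAGAIN && errno != EWOULDBLOCK))
            return n;

        /* 2. nothing queued: poll until the socket becomes readable
         * (the ring reports readiness once "watermark" packets are
         * queued) or until timeout_ms expires */
        struct pollfd pfd = { .fd = ring_fd, .events = POLLIN };
        int rc = poll(&pfd, 1, timeout_ms);
        if (rc <= 0)
            return rc;  /* 0 = timeout, -1 = error */

        /* 3. data should be ready now */
        return read(ring_fd, buf, len);
    }

The trade-off shows up directly in poll(): a bigger timeout or watermark
keeps the loop sleeping longer and wakes it less often, at the cost of
added latency for the packets already sitting in the ring.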

If I haven't explained this clearly, please let me know.

Regards
