tcpdump mailing list archives

Re: Multi process sniffing and dropped packets


From: Robert Lowe <Robert.H.Lowe () lawrence edu>
Date: Thu, 12 Jan 2006 22:58:11 -0600

Gianluca Varenni wrote:

BUT... is the pcap library able to safely manage multi-process
(or maybe multi-threaded) calls with the same pcap_t handle in
each process?


No. The pcap_t handle is not guaranteed to be thread-safe. In particular, a packet returned by pcap_next() (or pcap_next_ex()) is only valid until the next call to pcap_next_ex(), pcap_close(), or pcap_loop()/pcap_dispatch().
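Because the returned buffer is reused by the library, the usual workaround is to copy the bytes you care about before the next pcap call. A minimal sketch of that copy step (using a stand-in struct in place of pcap's real struct pcap_pkthdr, so it compiles without libpcap; with the real library these fields come from pcap_next_ex() or the pcap_loop() callback):

```c
#include <stdlib.h>
#include <string.h>

/* Stand-in for pcap's struct pcap_pkthdr (timestamp omitted). */
struct pkt_hdr {
    unsigned int caplen;   /* bytes actually captured */
    unsigned int len;      /* original length on the wire */
};

/* Fixed-size copy of the interesting part of a packet; unlike the
 * library-owned buffer, this is safe to hand to another thread. */
struct pkt_copy {
    struct pkt_hdr hdr;
    unsigned char data[64];   /* enough for typical IP+TCP headers */
};

static struct pkt_copy *copy_packet(const struct pkt_hdr *hdr,
                                    const unsigned char *bytes)
{
    struct pkt_copy *c = malloc(sizeof *c);
    if (c == NULL)
        return NULL;
    c->hdr = *hdr;
    size_t n = hdr->caplen < sizeof c->data ? hdr->caplen
                                            : sizeof c->data;
    memcpy(c->data, bytes, n);
    return c;   /* caller owns the copy; free() when done */
}
```

The capture thread calls this once per packet and passes the result to the workers; the original pcap buffer can then be invalidated without consequence.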


Any suggestion?



Depending on the work you need to do on every packet, I would probably have one thread receive all the packets, copy them (or the part of them you need -- probably very few bytes of each packet), and dispatch them to a number of processing threads. One issue is how to balance the packets across the processing threads.

I've done this before, and it works well.  I used one thread to do all
the packet capture work: it created a data structure with the parts of
the packet I was interested in and pushed it into a "work" queue.  Another
thread did the same with connection requests intended for a web server.
A pool of threads with synchronized access (a condition variable with a mutex) to the queue pulled items out and finished the work (getting
information from the IP header, and encoding it in an HTTP redirect
URL over the opened connection).  I was able to get it to handle
several hundred requests per second without too much trouble --
although if pushed too hard, it would exhaust file descriptors for
new connections, so I have no idea what the real ceiling is for the
packet capture part.  But, the idea proved very workable, especially
if the work to be done is greater than what is necessary to place
entries in the queue.  Only idle threads are watching the queue, so
work tends to balance across the thread pool pretty evenly.

-Robert
-
This is the tcpdump-workers list.
Visit https://lists.sandelman.ca/ to unsubscribe.

