tcpdump mailing list archives

Efficient packet copy


From: shaun () yap com au
Date: Mon, 24 Feb 2003 16:31:09 +1100


Hi All,

---- Warning, long email, skip to bottom for development proposal ----

I've been doing some benchmarking of pcap on a variety of platforms recently, 
using a fairly simple program that just collects some basic statistics from 
the packets seen. I've found that on most platforms it is possible to keep 
up with quite a large amount of traffic (even up to the magical 100MB/s on 
some platforms), but that capture is very sensitive to latency: the 
application performing the capture must not wait long between reads, or the 
kernel buffer fills and packets are discarded. 

Obviously, this is a bit of a problem for many applications. In the case of 
the application we wish to embed pcap into, it would be quite possible for 
there to be seconds between reads from pcap. My immediate thought is to fork 
off a new process to perform the capture (reading very quickly with minimal 
processing); that process could then feed the packets back to the parent 
process. 

The problem is then again one of efficiency: while the child process may be 
able to keep up with the packets, it still needs to store them somewhere and 
pass them back to the requestor efficiently. This would be most efficient 
using shared memory, so that packets do not have to be memcpy()'d between 
processes. 

Unfortunately, this still leaves us with at least two memory copies per 
packet: one from the kernel into the pcap_t data buffer, then one into the 
memory-mapped region. Given the volume of data involved, this is seriously 
inefficient. 

Thus I'd like to propose the concept of a "pcap_blob" (and volunteer to 
implement it here if we can get agreement). A pcap_blob would be a data block 
allocated by a client application and passed into pcap. The application would 
ask pcap for a minimum size for the blob (the size currently allocated in 
pcap_open for the capture buffer, plus some space for a header). It would 
then allocate some space (possibly many times that minimum) and pass the 
space and its size to a pcap_blob_initialize function, which would construct 
a header for the blob. The blob could then be passed into pcap_read_blob, 
which would use some space in the blob as the current capture buffer, then 
advance a pointer in the blob header to the next free space in the blob. The 
blob would thus be filled with capture buffers until no space remains. Other 
applications (potentially in other processes) could then use the blob with a 
pcap_dispatch_blob style function to retrieve the packets. 

I'm not sure if this is too ugly, hard, or intrusive to be considered, but 
I'd be interested in any feedback, or in other options for avoiding the 
repeated memory copy of every packet.

Thanks,
Shaun




-
This is the TCPDUMP workers list. It is archived at
http://www.tcpdump.org/lists/workers/index.html
To unsubscribe use mailto:tcpdump-workers-request () tcpdump org?body=unsubscribe

