tcpdump mailing list archives

Re: [PATCH] tcpdump -s 0 improvement


From: Gianluca Varenni <Gianluca.Varenni () riverbed com>
Date: Wed, 30 Nov 2011 03:48:13 +0000

Is there a specific reason why the shared-memory ring is implemented in such a way that frame buffers are allocated based on the maximum supported frame size (plus extra space for link-layer metadata; see 802.11)? In virtualized environments, or in general when you have HW offloading, the maximum frame size seen by the kernel tap is several tens of kilobytes. An extreme (and yet real) scenario: on a VM running on ESXi with the vmxnet3 NIC, the maximum frame size is 64K. This means that with small- to average-size packets the memory overhead is pretty high, to the point that I question the advantages of having shared memory.
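
To put rough numbers on that (a back-of-the-envelope sketch with made-up but plausible figures, not measurements from a real system):

    /* Illustration only: how much of a capture ring is wasted when
     * each slot is sized for the largest frame the tap can deliver. */
    #include <stdio.h>

    int main(void)
    {
        const unsigned ring_bytes = 4u * 1024 * 1024;  /* hypothetical 4 MB ring */
        const unsigned slot_bytes = 64u * 1024;        /* 64K slots, as with vmxnet3 + offload */
        const unsigned avg_packet = 512;               /* typical small/average packet */

        unsigned slots = ring_bytes / slot_bytes;
        double used_pct = 100.0 * avg_packet / slot_bytes;

        printf("%u slots in the ring; each slot ~%.1f%% full, so ~%.1f%% of the ring is wasted\n",
               slots, used_pct, 100.0 - used_pct);
        return 0;
    }

With those figures the ring holds only 64 packets at a time, and more than 99% of its memory never carries packet data.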

Have a nice day
GV

-----Original Message-----
From: tcpdump-workers-owner () lists tcpdump org [mailto:tcpdump-workers-owner () lists tcpdump org] On Behalf Of Guy 
Harris
Sent: Monday, November 28, 2011 1:16 PM
To: tcpdump-workers () lists tcpdump org
Subject: Re: [tcpdump-workers] [PATCH] tcpdump -s 0 improvement

...

See the current trunk (and 1.2 branch) of libpcap, which already does this.  In particular, see the comment

        /* Note that with large snapshot length (say 64K, which is the default
         * for recent versions of tcpdump, the value that "-s 0" has given
         * for a long time with tcpdump, and the default in Wireshark/TShark),
         * if we use the snapshot length to calculate the frame length,
         * only a few frames will be available in the ring even with pretty
         * large ring size (and a lot of memory will be unused).
         *
         * Ideally, we should choose a frame length based on the
         * minimum of the specified snapshot length and the maximum
         * packet size.  That's not as easy as it sounds; consider, for
         * example, an 802.11 interface in monitor mode, where the
         * frame would include a radiotap header, where the maximum
         * radiotap header length is device-dependent.
         *
         * So, for now, we just do this for Ethernet devices, where
         * there's no metadata header, and the link-layer header is
         * fixed length.  We can get the maximum packet size by
         * adding 18, the Ethernet header length plus the CRC length
         * (just in case we happen to get the CRC in the packet), to
         * the MTU of the interface; we fetch the MTU in the hopes
         * that it reflects support for jumbo frames.  (Even if the
         * interface is just being used for passive snooping, the driver
         * might set the size of buffers in the receive ring based on
         * the MTU, so that the MTU limits the maximum size of packets
         * that we can receive.)
         *
         * We don't do that if segmentation/fragmentation or receive
         * offload are enabled, so we don't get rudely surprised by
         * "packets" bigger than the MTU. */

which indicates why doing this correctly is a little more complicated than one might initially think.
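
For the Ethernet-only case the comment describes, the logic amounts to roughly the following. This is a simplified sketch of the idea, not the actual trunk code: the function names are mine, error handling is minimal, and it assumes a Linux packet socket fd plus the ethtool ioctl interface (the real code also checks further flags such as LRO and UFO).

    #include <string.h>
    #include <sys/types.h>
    #include <sys/ioctl.h>
    #include <net/if.h>
    #include <linux/ethtool.h>
    #include <linux/sockios.h>

    /* Fetch the interface MTU; -1 on error. */
    static int iface_get_mtu(int fd, const char *device)
    {
        struct ifreq ifr;

        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, device, sizeof(ifr.ifr_name) - 1);
        if (ioctl(fd, SIOCGIFMTU, &ifr) == -1)
            return -1;
        return ifr.ifr_mtu;
    }

    /* Is any of TSO/GSO/GRO enabled?  If so, the tap can hand us
     * "packets" (reassembled super-frames) far bigger than the MTU. */
    static int iface_offload_enabled(int fd, const char *device)
    {
        static const __u32 cmds[] = { ETHTOOL_GTSO, ETHTOOL_GGSO, ETHTOOL_GGRO };
        struct ethtool_value eval;
        struct ifreq ifr;
        size_t i;

        for (i = 0; i < sizeof(cmds) / sizeof(cmds[0]); i++) {
            memset(&ifr, 0, sizeof(ifr));
            strncpy(ifr.ifr_name, device, sizeof(ifr.ifr_name) - 1);
            eval.cmd = cmds[i];
            eval.data = 0;
            ifr.ifr_data = (caddr_t)&eval;
            if (ioctl(fd, SIOCETHTOOL, &ifr) != -1 && eval.data != 0)
                return 1;
        }
        return 0;
    }

    /* Per-frame size for the ring: the snapshot length, capped at
     * MTU + 18 (Ethernet header plus CRC) when that bound can be
     * trusted, i.e. on a plain Ethernet device with offloads disabled. */
    static unsigned ring_frame_len(int fd, const char *device, unsigned snaplen)
    {
        int mtu;

        if (iface_offload_enabled(fd, device))
            return snaplen;     /* MTU is not an upper bound here */
        mtu = iface_get_mtu(fd, device);
        if (mtu == -1)
            return snaplen;
        if ((unsigned)mtu + 18 < snaplen)
            return (unsigned)mtu + 18;
        return snaplen;
    }

The fragile part is exactly what the comment warns about: on anything that isn't plain Ethernet - monitor-mode 802.11 with its variable-length radiotap header, or a driver that delivers frames larger than the MTU - the MTU + 18 bound doesn't hold, and the only safe fallback is the full snapshot length.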

If somebody has recommendations for ways to handle 802.11 interfaces and offloading that *never* cause packets to be 
cut shorter than the specified snapshot length - or, ideally, a way to handle all interfaces under all conditions, in a 
fashion that won't break when somebody changes something in the kernel or drivers - I'd love to hear them.
-
This is the tcpdump-workers list.
Visit https://cod.sandelman.ca/ to unsubscribe.