tcpdump mailing list archives

Re: Libpcap on VMWare


From: Vikram Roopchand <vikram.roopchand () j-interop org>
Date: Tue, 12 Jan 2010 15:47:24 +0530

I forgot to mention, we do set a filter (Host IP and TCP Port) on libpcap.
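
For reference, the filter is applied through jnetpcap, which (as far as we
understand) ends up making roughly the following libpcap calls underneath.
This is only a sketch; the device name, IP, and port below are placeholders,
not our real values:

#include <pcap.h>
#include <stdio.h>

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    struct bpf_program fp;

    /* Placeholder device name; jnetpcap passes the real one for us. */
    pcap_t *p = pcap_open_live("eth0", 65535, 1, 1000, errbuf);
    if (p == NULL) {
        fprintf(stderr, "pcap_open_live: %s\n", errbuf);
        return 1;
    }

    /* Same kind of filter we set: one host IP and one TCP port
     * (placeholder values). Netmask 0, since the expression has no
     * broadcast tests. */
    if (pcap_compile(p, &fp, "host 192.0.2.10 and tcp port 5000", 1, 0) == -1 ||
        pcap_setfilter(p, &fp) == -1) {
        fprintf(stderr, "filter: %s\n", pcap_geterr(p));
        return 1;
    }
    pcap_freecode(&fp);

    /* ... the capture loop (pcap_loop()/pcap_dispatch()) runs here ... */
    pcap_close(p);
    return 0;
}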

thanks again,
best regards,
Vikram

On Tue, Jan 12, 2010 at 3:08 PM, Vikram Roopchand <
vikram.roopchand () j-interop org> wrote:

Hello There,
                 This is similar in nature to the posting at
http://article.gmane.org/gmane.network.tcpdump.devel/4256 (which
unfortunately remains unsolved). We are using jnetpcap, which is a wrapper
over libpcap. Mark Bednarczyk posted the original query (4256).

--------------------------------------

A little background:-

We are experiencing massive packet drops in libpcap while working with
non-Windows guests on VMware ESXi Server. The same thing happens on VMware
Player (host OS: Windows). We have tested on Ubuntu 8.04, FC11, and Debian;
the library seems to drop packets everywhere. The load is not heavy, but it
is constant (a steady stream of TCP packets of 1200 - 1500 bytes).

The packet drops DO NOT occur on Windows guest OSs (both via ESXi and
VMware Player). They happen only when we are working with non-Windows guests.

Libpcap version from Ubuntu (dpkg):

ii  libpcap0.8     0.9.8-2        System interface for user-level packet capture

---------------------------------------

As a temporary measure, we initially thought we might need to increase the
socket receive buffer size, as someone did here:
http://www.winpcap.org/pipermail/winpcap-users/2006-October/001521.html
We tried the configuration given in the link and it reduced packet drops
substantially, from over 20% earlier to about 2%, but still not to zero.
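
For reference, drop figures like these come from the kernel's capture
counters, which libpcap exposes via pcap_stats() and which, as far as we can
tell, is also where jnetpcap gets the numbers it reports to us. A minimal
sketch of reading them directly in C:

#include <pcap.h>
#include <stdio.h>

/* Print the capture statistics for an already-open handle.
 * ps_drop counts packets the kernel dropped because the capture
 * buffer was full; ps_ifdrop (driver drops) is often unsupported. */
static void report_drops(pcap_t *p)
{
    struct pcap_stat st;

    if (pcap_stats(p, &st) == 0)
        printf("received: %u  dropped by kernel: %u  dropped by iface: %u\n",
               st.ps_recv, st.ps_drop, st.ps_ifdrop);
}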

Being new to libpcap (and Linux), we are still struggling with some basic
concepts and would be grateful if someone could set us on the right track.

1. What we did with these commands

sysctl -w net.core.rmem_max=4194304
sysctl -w net.core.rmem_default=4194304

was to increase the Linux socket receive buffer size, so that when libpcap
opens its capture socket it gets this size (4 MB here). Is this
understanding correct?
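
To check that understanding, here is a small standalone sketch (our own test
idea, not libpcap code) showing how net.core.rmem_max caps what a plain
setsockopt(SO_RCVBUF) request actually gets:

#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    int requested = 4 * 1024 * 1024;   /* ask for a 4 MB receive buffer */
    int granted = 0;
    socklen_t len = sizeof(granted);

    setsockopt(s, SOL_SOCKET, SO_RCVBUF, &requested, sizeof(requested));
    getsockopt(s, SOL_SOCKET, SO_RCVBUF, &granted, &len);

    /* With the stock rmem_max this prints far less than requested;
     * after "sysctl -w net.core.rmem_max=4194304" it should report
     * about 8 MB (the kernel doubles the stored value to account for
     * its own bookkeeping overhead). */
    printf("requested %d bytes, kernel granted %d bytes\n",
           requested, granted);

    close(s);
    return 0;
}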

2. In the libpcap source file "pcap-bpf.c", at line 1618 (from
http://github.com/mcr/libpcap/blob/117cb5eb2eb4fe212d3851f1205bb0b8f57873c6/pcap-bpf.c),
it says

"We don't have a zero copy BPF, set the buffer size". May I know what this
means? What does the buffer size mentioned in the comment represent? Does
libpcap have its own buffer other than the socket buffer? On the subsequent
lines it says:

/*
* No buffer size was explicitly specified.
*
* Try finding a good size for the buffer;
* DEFAULT_BUFSIZE may be too big, so keep
* cutting it in half until we find a size
* that works, or run out of sizes to try.
* If the default is larger, don't make it smaller.
*/

DEFAULT_BUFSIZE is 512K.

So I am a bit confused :( ... When we ran sysctl -w
net.core.rmem_max=4194304 and sysctl -w net.core.rmem_default=4194304, what
exactly did we change? Does libpcap have its own buffer into which it copies
packet frames from the Linux socket? If so, how do we configure it from
outside so that we can increase its size as well? We found this link,
http://public.lanl.gov/cpw/README.ring.html, which talks about various
environment variables (PCAP_FRAMES, to be precise) that can be used to
configure libpcap, but I am not sure whether this gentleman compiled his own
libpcap version or whether this applies to the standard distro builds as
well.
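
One thing we did notice while digging: newer libpcap releases (1.0.0 and
later, i.e. newer than the 0.9.8 we have from the distro) appear to expose
this per-handle capture buffer directly through pcap_create() and
pcap_set_buffer_size(), instead of relying on DEFAULT_BUFSIZE or the rmem_*
sysctls alone. A sketch of what we believe that would look like (device name
and sizes are placeholders):

#include <pcap.h>
#include <stdio.h>

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];

    /* pcap_create()/pcap_activate() replace pcap_open_live() in
     * libpcap >= 1.0.0 and allow tuning the handle before activation. */
    pcap_t *p = pcap_create("eth0", errbuf);    /* placeholder device */
    if (p == NULL) {
        fprintf(stderr, "pcap_create: %s\n", errbuf);
        return 1;
    }

    pcap_set_snaplen(p, 65535);
    pcap_set_promisc(p, 1);
    pcap_set_timeout(p, 1000);
    pcap_set_buffer_size(p, 4 * 1024 * 1024);   /* 4 MB capture buffer */

    if (pcap_activate(p) < 0) {                 /* negative means error */
        fprintf(stderr, "pcap_activate: %s\n", pcap_geterr(p));
        return 1;
    }

    /* ... set the filter and run the capture loop here ... */
    pcap_close(p);
    return 0;
}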

May we also know what this ring buffer people keep talking about is? Does
the standard libpcap distribution have a ring buffer (related to the
question above)? And can the PCAP_MEMORY or PCAP_FRAMES environment
variables help increase it (as in the link above and here:
http://seclists.org/snort/2009/q1/209)? We really want to try that... I
don't think this is a VMware issue.

I apologize for asking so many questions and would be grateful if anyone
could shed some light here. Please feel free to direct me to any available
literature (I really did look, but I am now stumped).

thanks a lot,
best regards,
Vikram

-
This is the tcpdump-workers list.
Visit https://cod.sandelman.ca/ to unsubscribe.

