tcpdump mailing list archives

Re: HUGE packet-drop


From: "M. V." <bored_to_death85 () yahoo com>
Date: Mon, 7 Feb 2011 01:13:25 -0800 (PST)

Thank you all for your comments,


> I see code for tpacket support in the 2.4.20 source (two dot four dot
> twenty, not two dot six dot anything); I think it dates back before
> then (perhaps 2.4.0).  It requires CONFIG_PACKET_MMAP.
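
If I understand the mechanism right, this is roughly what the PACKET_MMAP
setup looks like from userland (just a sketch with arbitrary ring sizes,
not libpcap's actual code; the useful part is that setsockopt() failing
with ENOPROTOOPT would mean the kernel was built without
CONFIG_PACKET_MMAP):

/* Minimal PACKET_MMAP (tpacket) ring setup; needs root. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/mman.h>
#include <arpa/inet.h>
#include <linux/if_packet.h>
#include <linux/if_ether.h>

int main(void)
{
    int fd = socket(PF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (fd < 0) { perror("socket"); return 1; }

    struct tpacket_req req;
    memset(&req, 0, sizeof(req));
    req.tp_block_size = 4096;   /* one page per block */
    req.tp_frame_size = 2048;   /* two frames per block */
    req.tp_block_nr   = 64;
    req.tp_frame_nr   = 128;    /* block_nr * frames per block */

    if (setsockopt(fd, SOL_PACKET, PACKET_RX_RING, &req, sizeof(req)) < 0) {
        /* ENOPROTOOPT here => kernel lacks PACKET_MMAP */
        perror("PACKET_RX_RING");
        return 1;
    }

    /* Map the ring; the kernel copies packets straight into it. */
    size_t len = (size_t)req.tp_block_size * req.tp_block_nr;
    void *ring = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (ring == MAP_FAILED) { perror("mmap"); return 1; }

    printf("PACKET_MMAP ring mapped at %p\n", ring);
    munmap(ring, len);
    close(fd);
    return 0;
}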

I checked /proc/net/ptype on 2.6.26 while running tcpdump, and tpacket_rcv 
is registered there. I also tried Debian 5.0.3 with kernel 2.6.30, and 
Fedora 14 with kernel 2.6.35 on other hardware, but the huge packet drop 
is still there :((

debian:~# cat /proc/net/ptype    (Debian 5.0.3, kernel 2.6.26.2)
Type Device      Function
ALL  eth0     tpacket_rcv+0x0
0800          ip_rcv+0x0
0011          llc_rcv+0x0
0004          llc_rcv+0x0
0806          arp_rcv+0x0
86dd          :ipv6:ipv6_rcv+0x0
------------------------------------------------------------------------------------------

[root@fedora ~]# cat /proc/net/ptype 
Type Device      Function
ALL  eth0     tpacket_rcv+0x0/0x4f9
0800          ip_rcv+0x0/0x24d
0806          arp_rcv+0x0/0xe5
dada          edsa_rcv+0x0/0x244
001b          dsa_rcv+0x0/0x223
001c          trailer_rcv+0x0/0x170
86dd          ipv6_rcv+0x0/0x30a [ipv6]
------------------------------------------------------------------------------------------


1) Does this mean I definitely have PACKET_MMAP support, even on 2.6.26?
2) The strange thing is that with libpcap 1.0+ (1.0.0 or 1.1.1) my results 
get much worse (I see packet drops even at low traffic like 100 Mbps!), 
but with libpcap 0.9.8 it's better, and drops only start at around 
300 Mbps. What does this mean?
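
One thing I want to try next: the libpcap 1.0+ API can enlarge the
kernel capture buffer with pcap_set_buffer_size() before activating the
handle. A sketch of that ("eth0" and the 64 MB figure are just example
values I picked, not recommendations):

#include <stdio.h>
#include <pcap.h>

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t *p = pcap_create("eth0", errbuf);
    if (p == NULL) { fprintf(stderr, "%s\n", errbuf); return 1; }

    pcap_set_snaplen(p, 65535);
    pcap_set_promisc(p, 1);
    pcap_set_timeout(p, 1000);                 /* ms */
    pcap_set_buffer_size(p, 64 * 1024 * 1024); /* 64 MB capture buffer */

    if (pcap_activate(p) < 0) {
        fprintf(stderr, "activate: %s\n", pcap_geterr(p));
        return 1;
    }
    /* ... pcap_loop()/pcap_dump() as usual ... */
    pcap_close(p);
    return 0;
}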

> The biggest challenge usually is to have a disk system that is fast
> enough to write the stream of packets to disk. You might want to
> check this first.

I tried capturing to an SSD and to a RAM disk too, but the results didn't 
change, so I think (for now) my problem is something else.
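
To keep ruling the disk out, I'm also watching libpcap's own counters:
ps_drop counts packets the kernel dropped because the capture buffer
filled before userland read them, which is independent of disk speed.
Roughly (assuming "p" is an activated handle like in the sketch above):

#include <stdio.h>
#include <pcap.h>

void report_drops(pcap_t *p)
{
    struct pcap_stat st;
    if (pcap_stats(p, &st) == 0)
        printf("recv %u  kernel-drop %u  iface-drop %u\n",
               st.ps_recv, st.ps_drop, st.ps_ifdrop);
    else
        fprintf(stderr, "pcap_stats: %s\n", pcap_geterr(p));
}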



