tcpdump mailing list archives

Re: pcap_stats reporting incorrect value in ps_drop on solaris


From: Guy Harris <gharris () sonic net>
Date: Wed, 19 Feb 2003 00:10:30 -0800

On Mon, Feb 17, 2003 at 05:29:02PM +1100, Rebecca.Callan () ir com wrote:
> I was using libpcap-0.7.1 and found on Solaris that it was always reporting
> no dropped packets, even though I suspected that some had been dropped.  I
> got the latest from CVS (libpcap-2003.02.16), which includes a couple of
> changes that were supposed to fix this, but now I am getting extremely large
> values for the number of dropped packets (over 100 times more dropped
> packets than the number of packets received).
>
> Has anyone else had this problem?  Any ideas on what could be the problem?

The problem could be that the code in libpcap was buggy, but that, as the
earlier bug meant no drops were ever reported, nobody noticed.  (0 is 100
times more than 0, as well as being 1000 times more than 0, and
1,000,000,000 times more than 0, but you won't notice the extra
factor. :-))

The Solaris 2.4 and Solaris 9 man pages for bufmod say

        sbh_drops reports the cumulative number of input messages that
        this instance of bufmod has dropped due to flow control or
        resource exhaustion.

and "cumulative" presumably means that it contains the number of packets
that have been dropped *since the capture was started*, not since the
last chunk of packets was delivered.  Therefore, it shouldn't be *added*
to the count of dropped packets being maintained by libpcap; that count
should be set to the value of "sbh_drops".
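In code terms, the difference is a single operator.  The fragment below is
only an illustrative sketch, not the actual pcap-dlpi.c change (the helper
function and the way the pcap_stat structure is passed around are my own
assumptions); "struct sb_hdr" and "sbh_drops" are the bufmod definitions
described in the man page quoted above:

        #include <sys/types.h>
        #include <sys/time.h>
        #include <sys/bufmod.h>         /* struct sb_hdr, with sbh_drops */
        #include <pcap.h>               /* struct pcap_stat */

        /*
         * Hypothetical helper: called for each chunk of messages read from
         * the stream, with "bhp" pointing at the bufmod header prepended to
         * the chunk and "stat" being the counters kept for this capture.
         */
        static void
        update_drop_count(struct pcap_stat *stat, const struct sb_hdr *bhp)
        {
                /*
                 * Buggy version: sbh_drops is cumulative since the capture
                 * started, so adding it in counts the same drops repeatedly:
                 *
                 *      stat->ps_drop += bhp->sbh_drops;
                 *
                 * Correct version: just record the latest cumulative value.
                 */
                stat->ps_drop = bhp->sbh_drops;
        }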

The code was adding "sbh_drops" to the count, so you'd get a lot more packets
reported as dropped than were actually dropped.  I've checked in a change to
make it set the count to that value instead; try the next CVS snapshot
(2003.02.19) when it comes out, to see if that fixes the problem.
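
As a quick sanity check of whatever snapshot you end up with, printing the
counters from pcap_stats() every so often should now show ps_drop staying in
the same ballpark as the traffic actually seen, rather than dwarfing it.  A
minimal sketch (the reporting function name is my own invention):

        #include <stdio.h>
        #include <pcap.h>

        /* Print the capture statistics for an open capture handle. */
        static void
        print_stats(pcap_t *pd)
        {
                struct pcap_stat st;

                if (pcap_stats(pd, &st) < 0) {
                        fprintf(stderr, "pcap_stats: %s\n", pcap_geterr(pd));
                        return;
                }
                printf("received %u packets, dropped %u packets\n",
                    st.ps_recv, st.ps_drop);
        }
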
-
This is the TCPDUMP workers list. It is archived at
http://www.tcpdump.org/lists/workers/index.html
To unsubscribe use mailto:tcpdump-workers-request () tcpdump org?body=unsubscribe

