tcpdump mailing list archives

Re: reconstruct HTTP requests in custom sniffer


From: kay <kay21s () gmail com>
Date: Wed, 29 Dec 2010 10:01:45 +0800

Hi,

I implemented an HTTP parser a year ago. As I remember, when the parser
only calculated request-response latency and inspected the fields of
interest, without recording or dumping them, it reached about 2 Gbps on a
single core and 8 Gbps on 6 cores. So I think a 0.05 Mpps parser is easy to
achieve.
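(For scale: 50,000 packets/s at a full 1500-byte MTU is 50,000 x 1500 x 8
bits/s, or roughly 0.6 Gbps, well below that single-core 2 Gbps figure.)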

However, since you said you have to reconstruct whole HTTP requests
including their POST data, that is a different story. You need to buffer
the earlier packets of a request and memcpy() their payloads into a
contiguous buffer as the later packets arrive. In my experience the cost
is huge, especially the memcpy() operation. How much it hurts depends on
what fraction of your traffic consists of such cross-packet POST requests;
usual GET requests do not have this issue. A rough sketch of the buffering
follows.
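Something like this (a minimal sketch, assuming in-order TCP segments and
ignoring retransmissions and overlapping segments; the struct and function
names are made up for illustration):

#include <stdlib.h>
#include <string.h>

/* Accumulates the payload of one TCP flow in a contiguous buffer. */
struct flow_buf {
    unsigned char *data;   /* concatenated payload so far */
    size_t len;            /* bytes accumulated */
    size_t cap;            /* allocated size */
};

/* Append one segment's payload; this memcpy() (plus the occasional
 * realloc) is where the cost goes. Returns 0 on success. */
static int flow_append(struct flow_buf *fb,
                       const unsigned char *payload, size_t plen)
{
    if (fb->len + plen > fb->cap) {
        size_t ncap = fb->cap ? fb->cap * 2 : 4096;
        while (ncap < fb->len + plen)
            ncap *= 2;
        unsigned char *p = realloc(fb->data, ncap);
        if (p == NULL)
            return -1;
        fb->data = p;
        fb->cap = ncap;
    }
    memcpy(fb->data + fb->len, payload, plen);
    fb->len += plen;
    return 0;
}

A POST request is complete once the buffer holds the full header block
plus Content-Length bytes of body (chunked transfer encoding needs extra
handling).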

Hope it helps!

--Kay


On Wed, Dec 29, 2010 at 1:22 AM, Andrej van der Zee
<andrejvanderzee () gmail com> wrote:

Hi,

I have been asked to write a custom sniffer with libpcap on Linux that has
to handle a load of 50,000 packets per second. The sniffer has to detect
all HTTP requests and dump the URI with additional information, such as
request size and possibly response time/size. The packets, destined for
the load-balancer, are duplicated by the switch to my own machine using
port mirroring. It is important that our solution is 100% non-intrusive to
the web application being monitored.
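For concreteness, the capture loop I have in mind is roughly the following
(a minimal sketch using the standard libpcap calls; the device name "eth1"
and the filter string are placeholders):

#include <pcap.h>
#include <stdio.h>

/* Called once per captured packet; HTTP parsing would start here. */
static void handler(u_char *user, const struct pcap_pkthdr *h,
                    const u_char *bytes)
{
    (void)user; (void)bytes;
    printf("got %u bytes\n", h->caplen);
}

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t *p = pcap_open_live("eth1", 65535, 1, 1000, errbuf);
    if (p == NULL) {
        fprintf(stderr, "pcap_open_live: %s\n", errbuf);
        return 1;
    }

    struct bpf_program fp;
    if (pcap_compile(p, &fp, "tcp port 80", 1,
                     PCAP_NETMASK_UNKNOWN) == 0)
        pcap_setfilter(p, &fp);

    pcap_loop(p, -1, handler, NULL);   /* run until error or break */
    pcap_close(p);
    return 0;
}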

I will probably need to access the POST data of certain HTTP requests.
Since such requests can obviously be broken into multiple packets, is it
feasible to reconstruct the whole HTTP request, including its POST data,
from multiple packets?

Regarding the load of 50,000 packets a second, is this expected to be a
problem?

Any feedback is very appreciated!

Cheers,
Andrej

-
This is the tcpdump-workers list.
Visit https://cod.sandelman.ca/ to unsubscribe.

