
Writing high-performance npcap application


From: Jan Danielsson <jan.m.danielsson () gmail com>
Date: Wed, 27 Apr 2022 20:21:44 +0200

Hello,

[The npcap page said it was OK to use the nmap mailing list for npcap-related questions. If there's a more appropriate forum, please point me to it.]

I'm working on an application that requires very high transfer rates of raw ethernet packets. As a reference, we use libpcap on unixy platforms and are able to saturate a 1Gbit/s link, with zero packet loss. A few customers need Windows support, so we're looking to use/license npcap for this purpose.

Thanks to the pcap compatibility, porting this application (I use the pcap Rust crate, which is a thin wrapper on top of the pcap library) was mostly trivial, but I ran into some things which may either be me not reading the documentation carefully enough, or actual documentation deficiencies.

In the test applications (there's a sender and a receiver) I send a header packet from the sending application, then I send all the test packets (all are indexed using a counter). The packets are sent in batches. Between each batch the sender pauses slightly. Finally it sends an "end" packet. The receiver will make sure it receives each packet, and when it receives the "end" packet it'll output any issues it encountered (missed packets, wrong order), and a transfer rate.
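For concreteness, the frames look roughly like this when sketched in C -- purely illustrative: the field names, sizes and the 0x88B5 ethertype are stand-ins, not our actual wire format:

#include <stdint.h>

/* Illustrative layout only. The point is that every test frame carries a
 * type tag and a monotonically increasing index, so the receiver can detect
 * loss and reordering without storing anything. */
#define TEST_ETHERTYPE 0x88B5   /* example: IEEE local-experimental ethertype */

enum test_pkt_type {
    PKT_HEADER = 0,  /* announces a run (batch size, packet count, ...) */
    PKT_DATA   = 1,  /* indexed test packet */
    PKT_END    = 2   /* tells the receiver to print its statistics */
};

#pragma pack(push, 1)
struct test_frame {
    uint8_t  dst[6];
    uint8_t  src[6];
    uint16_t ethertype;   /* TEST_ETHERTYPE, big-endian on the wire */
    uint8_t  type;        /* enum test_pkt_type */
    uint64_t index;       /* sequence counter for PKT_DATA frames */
    uint8_t  payload[64]; /* padding / test payload */
};
#pragma pack(pop)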

I'm having trouble with the receiver (more on that later), but this first part concerns the sender (npcap) -> receiver (libpcap on Linux) direction:

At first I got pretty abysmal performance because I used pcap_sendpacket(). This was expected, so I implemented sendqueue support in the pcap crate and used that instead. This however did not work -- I kept running out of memory. This was the first minor stumbling block: I thought that one could reuse a sendqueue buffer (i.e. that it implicitly gets reset after a transmission), but that does not seem to be the case? When I rewrote the code to allocate/free a new sendqueue for each batch, it worked, and I got _really_ good performance as well. Just to be clear: Have I understood correctly that the sendqueue does not auto-reset after transmission, and that I need to allocate a new sendqueue for each batch?
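For reference, the per-batch pattern I ended up with amounts to the following when sketched against the C sendqueue API (my actual code drives these calls through the Rust wrapper, so the names and sizes here are illustrative):

#include <pcap.h>
#include <stdio.h>

/* Send one batch of pre-built frames. Since reusing a queue after
 * pcap_sendqueue_transmit() did not appear to work for me, the queue is
 * allocated and destroyed for every batch. */
static int send_batch(pcap_t *handle, const u_char **frames,
                      const u_int *lens, size_t count, u_int queue_bytes)
{
    pcap_send_queue *queue = pcap_sendqueue_alloc(queue_bytes);
    if (queue == NULL)
        return -1;

    struct pcap_pkthdr hdr = {0};
    for (size_t i = 0; i < count; i++) {
        hdr.caplen = hdr.len = lens[i];
        if (pcap_sendqueue_queue(queue, &hdr, frames[i]) == -1) {
            fprintf(stderr, "queue full or packet too large\n");
            break;
        }
    }

    /* sync = 0: transmit as fast as possible, ignoring timestamps */
    u_int sent = pcap_sendqueue_transmit(handle, queue, 0);
    if (sent < queue->len)
        fprintf(stderr, "transmit error: %s\n", pcap_geterr(handle));

    pcap_sendqueue_destroy(queue);
    return 0;
}

If sendqueues are in fact meant to be reusable after a transmit (e.g. by resetting the len member), I'd happily switch back to a single long-lived queue.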

However, when I ran a long test, I got an error saying that some resources were exhausted. I obviously need to double-check that the code actually releases the sendqueue on each iteration -- but I'm pretty sure it does. However, I'm sending *a lot* of packets. Is there any known resource leak in npcap when sending very many packets using sendqueues?

The application uses a custom ethertype (I don't want the operating systems to waste cycles trying to make sense of the protocol).
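For completeness, a kernel-level BPF filter on that ethertype -- so nothing else reaches the application -- would look roughly like this (a sketch, reusing the illustrative 0x88B5 value from above):

#include <pcap.h>

/* Restrict capture to the custom ethertype so the driver drops everything
 * else before it reaches the application. */
static int filter_custom_ethertype(pcap_t *handle)
{
    struct bpf_program prog;
    if (pcap_compile(handle, &prog, "ether proto 0x88B5",
                     1 /* optimize */, PCAP_NETMASK_UNKNOWN) == -1)
        return -1;
    int rc = pcap_setfilter(handle, &prog);
    pcap_freecode(&prog);
    return rc;
}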

The receiver is in much worse shape. It will receive a number of packets (a few thousand, IIRC) and then simply stop receiving new packets.

Are there any special considerations one must take into account when trying to receive packets at a high rate? At first I thought the capture buffer might be overflowing (it was set to 1MB), but when I increased it to 16MB the receiver stopped at roughly the same number of packets. (The application does not try to store any data on the receiver -- it just receives the packet, checks that its index matches the expected index, and then throws the packet away.)
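To show what I mean by the buffer setting: the receiver's capture handle is opened roughly like this when expressed against the C API (my real code goes through the Rust crate; the snaplen and timeout values are illustrative, and the 16MB buffer is the value I tried):

#include <pcap.h>
#include <stdio.h>

/* Open a capture handle with an enlarged kernel buffer. */
static pcap_t *open_receiver(const char *device)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t *handle = pcap_create(device, errbuf);
    if (handle == NULL) {
        fprintf(stderr, "pcap_create: %s\n", errbuf);
        return NULL;
    }

    pcap_set_snaplen(handle, 2048);
    pcap_set_promisc(handle, 1);
    pcap_set_timeout(handle, 100);                   /* read timeout, ms */
    pcap_set_buffer_size(handle, 16 * 1024 * 1024);  /* kernel buffer */

    if (pcap_activate(handle) != 0) {
        fprintf(stderr, "pcap_activate: %s\n", pcap_geterr(handle));
        pcap_close(handle);
        return NULL;
    }
    return handle;
}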


Both the sender and the receiver work fine on Linux and are (at the source level) identical to the Windows versions, except that the sender uses raw sockets on Linux, while it uses a sendqueue on Windows.

The receiver is using the latest npcap on Windows 11 x64 (with the latest stable updates).

--
Kind Regards,
Jan
_______________________________________________
Sent through the dev mailing list
https://nmap.org/mailman/listinfo/dev
Archived at https://seclists.org/nmap-dev/

