Pcap Dropping Packets

// Open the ethernet adapter
handle = pcap_open_live("eth0", 65536, 1, 0, errbuf);

// Make sure it opens correctly
if(handle == NULL)
{
    printf("Couldn't open device: %s\n", errbuf);
    exit(1);
}

// Compile filter
if(pcap_compile(handle, &bpf, "udp", 0, PCAP_NETMASK_UNKNOWN))
{
    printf("pcap_compile(): %s\n", pcap_geterr(handle));
    exit(1);
}

// Set Filter
if(pcap_setfilter(handle, &bpf) < 0)
{
    printf("pcap_setfilter(): %s\n", pcap_geterr(handle));
    exit(1);
}

// Set signals
signal(SIGINT, bailout);
signal(SIGTERM, bailout);
signal(SIGQUIT, bailout);

// Setup callback to process the packet
pcap_loop(handle, -1, process_packet, NULL);

The process_packet function strips the header and does a bit of processing on the data. However, when it takes too long, I think packets are being dropped.

How can I use pcap to listen for UDP packets and do some processing on the data without losing packets?

asked Oct 22 '22 by John Smith

1 Answer

Well, you don't have infinite storage, so if you consistently process packets slower than they arrive, you will lose data at some point.

Of course, if you have a decent amount of storage and, on average, you don't fall behind (for example, you may run slow during bursts but there are quiet times when you can catch up), that will alleviate the problem.

Some network sniffers do this, simply writing the raw data to a file for later analysis.

It's a trick you too can use, though not necessarily with a file. You can use a large in-memory structure such as a circular buffer, where one thread (the capture thread) writes the raw data and another thread (the analysis thread) reads and interprets it. And, because each thread only handles one end of the buffer, you can even architect it without locks (or with very short-lived ones).

That also makes it easy to detect if you've run out of buffer and raise an error of some sort rather than just losing data at your application level.

Of course, this all hinges on your "simple and quick as possible" capture thread being able to keep up with the traffic.


Clarifying what I mean: modify your process_packet function so that it does nothing but write the raw packet to a large circular buffer (detecting overflow and acting accordingly). That keeps it as fast as possible, so pcap itself is less likely to drop packets.

Then, have an analysis thread that takes stuff off the queue and does the work formerly done in process_packet (the "gets rid of header and does a bit of processing on the data" bit).


Another possible solution is to bump up the pcap internal buffer size. As per the man page:

Packets that arrive for a capture are stored in a buffer, so that they do not have to be read by the application as soon as they arrive.

On some platforms, the buffer's size can be set; a size that's too small could mean that, if too many packets are being captured and the snapshot length doesn't limit the amount of data that's buffered, packets could be dropped if the buffer fills up before the application can read packets from it, while a size that's too large could use more non-pageable operating system memory than is necessary to prevent packets from being dropped.

The buffer size is set with pcap_set_buffer_size().
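One wrinkle: pcap_open_live() gives you no way to set the buffer size, so to use pcap_set_buffer_size() you build the handle with the pcap_create()/pcap_activate() sequence instead. A sketch, where the device name and the 16 MiB figure are just illustrative choices:

```c
#include <pcap/pcap.h>
#include <stdio.h>

/* Open a capture handle with an enlarged kernel buffer.
 * All pcap_set_*() calls must happen before pcap_activate(). */
pcap_t *open_with_big_buffer(const char *dev, char *errbuf)
{
    pcap_t *h = pcap_create(dev, errbuf);
    if (h == NULL)
        return NULL;

    pcap_set_snaplen(h, 65536);                 /* capture whole packets */
    pcap_set_promisc(h, 1);                     /* promiscuous mode */
    pcap_set_timeout(h, 1000);                  /* read timeout in ms */
    pcap_set_buffer_size(h, 16 * 1024 * 1024);  /* 16 MiB, illustrative */

    if (pcap_activate(h) < 0) {
        fprintf(stderr, "pcap_activate(): %s\n", pcap_geterr(h));
        pcap_close(h);
        return NULL;
    }
    return h;
}
```

After activation you can compile and set the "udp" filter and call pcap_loop() exactly as in the question; only the opening changes.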


The only other possibility that springs to mind is to ensure that the processing you do on each packet is as optimised as it can be.

The splitting of processing into collection and analysis should alleviate a problem of not keeping up but it still relies on quiet time to catch up. If your network traffic is consistently more than your analysis can handle, all you're doing is delaying the problem. Optimising the analysis may be the only way to guarantee you'll never lose data.

answered Oct 27 '22 by paxdiablo