In C/C++ under Linux, I need to allocate a large (several gigabyte) block of memory to store real-time data from a sensor connected to the Ethernet port, streaming at about 110 MB/s. I'd like to allocate the largest amount of memory possible, to maximise the length of the data sequence I can store. However, I also need to make sure there will be no disk swapping, since the resulting delay and the limited bandwidth of disk access would cause the sensor's (very limited) buffer to overflow.
What is the best way to determine how much memory to allocate? Am I limited to just allocating a slightly smaller block than the reported free memory, or can I interface more directly with the Linux virtual memory manager?
Well, under Linux you can use mlock()/mlockall() to keep an address range in physical memory and prevent it from being swapped out. The process calling mlock needs certain privileges to do so; "man mlock" has the details. I am not sure about the maximum mlock'able block size (it might differ from what appears to be "free"), so a binary search could help: lock a range, and if that fails, reduce the size and try again, as in the sketch below.
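Here is a minimal sketch of that binary-search idea. The 8 GiB upper bound and 64 MiB granularity are arbitrary assumptions, and real code would need CAP_IPC_LOCK or a raised RLIMIT_MEMLOCK (again, see "man mlock"):

```c
#include <stdio.h>
#include <sys/mman.h>

/* Try to map and lock `size` bytes of anonymous memory; release it again
 * and report whether the lock succeeded. */
static int try_lock(size_t size)
{
    void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
        return 0;

    int ok = (mlock(p, size) == 0);
    if (ok)
        munlock(p, size);
    munmap(p, size);
    return ok;
}

int main(void)
{
    size_t lo = 0;                        /* largest size known to lock      */
    size_t hi = (size_t)8 << 30;          /* assumed upper bound: 8 GiB      */
    const size_t step = (size_t)64 << 20; /* stop once within 64 MiB         */

    while (hi - lo > step) {
        size_t mid = lo + (hi - lo) / 2;
        if (try_lock(mid))
            lo = mid;                     /* mid bytes could be locked       */
        else
            hi = mid;                     /* too large; shrink the range     */
    }

    printf("Largest lockable block found: ~%zu MiB\n", lo >> 20);
    return 0;
}
```

Once the size is known, you would mmap and mlock a block of that size once and keep it locked for the lifetime of the capture, rather than locking and unlocking as the probe does.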
On the other hand, 110 MB/s is not really a problem for a solid-state drive. A 60 GB SSD with a 280 MB/s write speed costs about $200 at any retailer. Just copy the sensor data into a small write buffer and stream that to the SSD.
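A rough sketch of that approach follows. recv_from_sensor() and the /mnt/ssd path are placeholders; the real code would read from the sensor socket and handle partial writes:

```c
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define CHUNK (4 * 1024 * 1024)   /* 4 MiB write buffer */

/* Placeholder for reading from the sensor socket: fills buf and returns
 * the number of bytes received. */
static ssize_t recv_from_sensor(char *buf, size_t len)
{
    memset(buf, 0, len);          /* stand-in for recv() on the sensor socket */
    return (ssize_t)len;
}

int main(void)
{
    char *buf = malloc(CHUNK);
    int fd = open("/mnt/ssd/capture.bin",            /* assumed SSD mount point */
                  O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (!buf || fd < 0) {
        perror("setup");
        return 1;
    }

    for (int i = 0; i < 100; i++) {                   /* capture 100 chunks as a demo */
        ssize_t n = recv_from_sensor(buf, CHUNK);
        if (n <= 0)
            break;
        if (write(fd, buf, (size_t)n) != n) {         /* SSD write keeps ahead of 110 MB/s */
            perror("write");
            break;
        }
    }

    close(fd);
    free(buf);
    return 0;
}
```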
If the computer system is dedicated to receiving data from your sensor, you can simply disable swap. Then allocate as big a buffer as you can, leaving only enough memory in the system for essential tools.
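A minimal sketch of that idea, assuming swap has already been disabled with `swapoff -a` (as root); the 512 MiB headroom left for the rest of the system is an arbitrary choice:

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/sysinfo.h>

int main(void)
{
    struct sysinfo info;
    if (sysinfo(&info) != 0) {
        perror("sysinfo");
        return 1;
    }

    /* Free physical RAM as reported by the kernel. */
    size_t free_bytes = (size_t)info.freeram * info.mem_unit;
    size_t headroom   = (size_t)512 << 20;   /* leave ~512 MiB for the system */
    size_t want = free_bytes > headroom ? free_bytes - headroom : 0;

    char *buf = malloc(want);
    if (!buf) {
        fprintf(stderr, "allocation of %zu MiB failed\n", want >> 20);
        return 1;
    }

    /* Note: with overcommit enabled, the pages are not reserved until they
     * are touched; writing to (or mlock'ing) the buffer up front avoids
     * surprises once the capture is running. */
    printf("Allocated %zu MiB for the capture buffer\n", want >> 20);

    free(buf);
    return 0;
}
```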