I have a robot running control code at real-time priority on a PREEMPT_RT-patched Linux kernel on a BeagleBone Black. All of the code is written in C and runs at 500 Hz.
Every so often I see latency spikes in the range of a few hundred milliseconds, and I've tracked them down to the data logging function I wrote. These spikes cause my robot's control to fail, since a lot depends on the loop meeting its real-time deadline.
The relevant portion of code is below. I've cut a lot of code for clarity, but I'll edit this post if anything is needed.
FILE *file;

int main(int argc, char **argv) {
    file = fopen(logname, "w");
    while (1) {
        /* Control code stuff */
        logData();
        time_msec = time_msec + controlLoopTime;
    }
}
void logData() {
    if (time_msec - logTimer_msec >= LOG_TIMER) {
        logTimer_msec = time_msec;
        if (!bLogCreated) {
            /* write the header line once */
            fprintf(file, "SensorData1 SensorData2 SensorDataN\n");
            bLogCreated = TRUE;
        }
        // log data to file
        fprintf(file, "%.2f %.2f %.2f\n",
                sensorData1, sensorData2, sensorDataN);
    }
}
I need to log data from multiple variables (probably 20-50) at a decent rate, maybe 100-125 Hz. The data doesn't need to be logged at the control rate (every 2 ms); I've already slowed logging down to once every 12 ms, but I still see latency spikes every few minutes.
The latency may be an issue with the fprintf call. Is this a limitation of the BeagleBone Black, my code, or just the nature of data logging?
A similar question was asked here but didn't seem to address my issue: Finding latency issues (stalls) in embedded Linux systems
Using fprintf is a huge time sink, particularly for R/T logging. Do the logging in binary and write a utility to print it out later.
Instead of:
fprintf(file, "%.2f %.2f %.2f", data1, data2, data3);
Do:
fwrite(&data1, sizeof(double), 1, file);
fwrite(&data2, sizeof(double), 1, file);
fwrite(&data3, sizeof(double), 1, file);
Even better:
struct data {
    double data1;
    double data2;
    double data3;
    time_t event_time;
    ...
};

struct data data;
fwrite(&data, sizeof(struct data), 1, file);
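The "utility to print it out later" can be very small. A sketch of a matching reader, assuming the same struct layout as above (the reader must be built with the same compiler and padding as the logger, and names like `read_log` are illustrative):

```c
#include <stdio.h>
#include <time.h>

/* Must match the logger's struct exactly, including padding,
 * or records will be misread. */
struct data {
    double data1;
    double data2;
    double data3;
    time_t event_time;
};

/* Read up to max records from the binary log into out;
 * returns the number of records read, or -1 if the file
 * can't be opened. */
long read_log(const char *path, struct data *out, long max)
{
    FILE *in = fopen(path, "rb");
    if (!in)
        return -1;

    long n = 0;
    while (n < max && fread(&out[n], sizeof out[n], 1, in) == 1)
        n++;

    fclose(in);
    return n;
}
```

A printer then just loops over the returned records and does the `fprintf` formatting offline, where timing no longer matters.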
If it's still too slow, append the struct to a ring queue and have a separate thread write out the entries.
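A minimal sketch of that ring queue, assuming a single producer (the 500 Hz control loop) and a single consumer (a lower-priority writer thread that drains entries with `fwrite`); the names `ring_push`/`ring_pop` and the queue size are illustrative:

```c
#include <stdatomic.h>

#define RING_SIZE 1024          /* must be a power of two */

struct data { double data1, data2, data3; };

static struct data ring[RING_SIZE];
static atomic_uint head;        /* advanced only by the control loop  */
static atomic_uint tail;        /* advanced only by the writer thread */

/* Control-loop side: O(1), no syscalls, never blocks.
 * On overflow it drops the sample rather than stall the loop. */
int ring_push(const struct data *d)
{
    unsigned h = atomic_load_explicit(&head, memory_order_relaxed);
    unsigned next = (h + 1) & (RING_SIZE - 1);
    if (next == atomic_load_explicit(&tail, memory_order_acquire))
        return -1;              /* full */
    ring[h] = *d;
    atomic_store_explicit(&head, next, memory_order_release);
    return 0;
}

/* Writer-thread side: drain entries and fwrite them at leisure. */
int ring_pop(struct data *d)
{
    unsigned t = atomic_load_explicit(&tail, memory_order_relaxed);
    if (t == atomic_load_explicit(&head, memory_order_acquire))
        return -1;              /* empty */
    *d = ring[t];
    atomic_store_explicit(&tail, (t + 1) & (RING_SIZE - 1),
                          memory_order_release);
    return 0;
}
```

The writer thread would run at normal (non-RT) priority in a loop of `ring_pop` + `fwrite`, so any disk stalls land on it instead of on the control loop.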
If the disk writes can't keep up with the [now] binary data, maintain the ring queue and only dump it out post-mortem if you detect a fatal error.
Also, consider using mmap to access the file when writing. See my answer [with benchmarks] here: read line by line in the most efficient way *platform specific*
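A sketch of the mmap approach, assuming you pre-size the file for a known maximum number of records (names like `log_open`/`log_write` are illustrative): the control loop then does a plain struct copy into the mapping with no syscall, and the kernel writes pages back asynchronously. `MAP_POPULATE` pre-faults the pages so the RT loop doesn't take page faults mid-cycle.

```c
#define _GNU_SOURCE             /* for MAP_POPULATE (Linux-specific) */
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

struct data { double data1, data2, data3; };

static struct data *log_map;    /* the mapped log file */
static size_t log_len;
static int log_fd = -1;

/* Pre-size the file and map it; returns 0 on success. */
int log_open(const char *path, size_t max_records)
{
    log_len = max_records * sizeof(struct data);
    log_fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0644);
    if (log_fd < 0 || ftruncate(log_fd, (off_t)log_len) < 0)
        return -1;
    log_map = mmap(NULL, log_len, PROT_READ | PROT_WRITE,
                   MAP_SHARED | MAP_POPULATE, log_fd, 0);
    return log_map == MAP_FAILED ? -1 : 0;
}

/* In the control loop: a plain struct copy, no syscall. */
void log_write(size_t i, const struct data *d)
{
    log_map[i] = *d;
}

/* On shutdown, or periodically from a non-RT thread. */
int log_close(void)
{
    msync(log_map, log_len, MS_SYNC);
    munmap(log_map, log_len);
    return close(log_fd);
}
```
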