I'm streaming a large file (~1 GB) over HTTP to my Qt server on a very memory-constrained embedded Linux device. When I first receive the header I determine where on the filesystem to write the data, create a QFile pointer to that location, and open the file for appending. The server has an 'accumulate' function that is called each time new data arrives on the socket. From that accumulate function I want to stream the data straight to the file via write(). You can see my accumulate function below.
My problem is memory usage when doing this -- I run out of memory. Shouldn't I be able to flush() and fsync() on each iteration of the accumulation and not have to worry about RAM usage? What am I doing wrong, and how can I fix this? Thanks.
I open my file once before the accumulate function:
// Open the file once, before any data arrives
filePointerToWriteTo->open(QIODevice::WriteOnly | QIODevice::Append | QIODevice::Unbuffered);
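The opened QFile pointer then gets handed to the accumulate function inside a QVariant, roughly like this (a sketch of my setup -- the path here is a placeholder, since the real destination comes from the HTTP header):

// Stash the QFile* in a QVariant so the accumulate callback can unpack it later
QFile *filePointerToWriteTo = new QFile("/placeholder/path/from/header");
filePointerToWriteTo->open(QIODevice::WriteOnly | QIODevice::Append | QIODevice::Unbuffered);
containerForPointer->pointer = QVariant::fromValue(static_cast<void *>(filePointerToWriteTo));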
Here is a portion of the accumulate function:
// Extract the QFile pointer from the QVariant
QFile *filePointerToWriteTo =
        static_cast<QFile *>((containerForPointer->pointer).value<void *>());
qDebug() << "APPENDING bytes:" << data.length();
// Write to the file and sync
filePointerToWriteTo->write(data);
filePointerToWriteTo->waitForBytesWritten(-1);
filePointerToWriteTo->flush();          // flush Qt's internal write buffer
fsync(filePointerToWriteTo->handle());  // fsync() from <unistd.h>: force the bytes to disk
EDIT:
I instrumented my code, and the 'waitForBytesWritten(-1)' call ALWAYS returns 'false'. The docs say it should block until the data has been written to the device.
Also, if I comment out only the 'write(data)' line, then my free memory never decreases. What could be going on? How does 'write' consume so much memory?
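To figure out whether that memory is going to my process's heap or to the kernel, I've been sampling the two numbers below once per accumulate call (an instrumentation sketch I added for debugging -- procFieldKb and logMemoryUsage are my own helpers, not Qt API):

#include <QFile>
#include <QByteArray>
#include <QDebug>

// Pull one "Key:   <value> kB" field out of a /proc file.
static long procFieldKb(const char *path, const QByteArray &key)
{
    QFile f(path);
    if (!f.open(QIODevice::ReadOnly | QIODevice::Text))
        return -1;
    while (!f.atEnd()) {
        QByteArray line = f.readLine();
        if (line.startsWith(key))
            return line.mid(key.size()).simplified().split(' ').first().toLong();
    }
    return -1;
}

static void logMemoryUsage()
{
    // VmRSS = this process's resident memory; Cached = kernel page cache
    qDebug() << "VmRSS kB:" << procFieldKb("/proc/self/status", "VmRSS:")
             << "Cached kB:" << procFieldKb("/proc/meminfo", "Cached:");
}

If VmRSS stays flat while Cached balloons, the memory is going to the kernel's page cache rather than to my process.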
EDIT:
Now I am doing the following. I do not run out of memory, but my free memory drops to about 2 MB and hovers there until the entire file is transferred, at which point the memory is released. If I kill the transfer in the middle, the kernel seems to hold on to the memory, because it stays around 2 MB free until I restart the process and try to write to the same file. I still think I should be able to use and then release that memory on each iteration:
// Extract the QFile pointer from the QVariant
QFile *filePointerToWriteTo =
        static_cast<QFile *>((containerForPointer->pointer).value<void *>());
qint64 numberOfBytesWritten = filePointerToWriteTo->write(data);
qDebug() << "APPENDING bytes:" << data.length() << "ACTUALLY WROTE:" << numberOfBytesWritten;
// Flush and sync
bool didWaitForWrite = filePointerToWriteTo->waitForBytesWritten(-1); // <-- this ALWAYS returns false!
filePointerToWriteTo->flush();             // flush Qt's internal write buffer
fsync(filePointerToWriteTo->handle());     // sync this file's data and metadata to disk
fdatasync(filePointerToWriteTo->handle()); // sync just this file's data
sync();                                    // sync everything, system-wide
EDIT:
This sounds like me misunderstanding Linux caching. After reading this post --> http://blog.scoutapp.com/articles/2010/10/06/determining-free-memory-on-linux, it's possible that I am misreading the output of 'free -mt'. I have been watching the 'free' field in that output and see it drop to hover around 2 MB during the massive file transfer. I would just like to see it return to a high free figure when the transfer is done.
I think Linux is just caching everything it can and freeing whatever it can spare as it nears that 2 MB floor. I do not run out of memory when receiving or sending ~2 GB of files on a 512 MB RAM system. In my Qt program, after receiving all of the data, appending it to the file, and closing the file, I do the following in a QProcess so that I can watch the 'free' memory return in the 'free -mt' output in a separate terminal:
// Now we've returned a large file - so free up the Linux page cache
// (writing to drop_caches requires root)
QProcess freeCachedMemory;
freeCachedMemory.start("sh");
freeCachedMemory.waitForStarted();
freeCachedMemory.write("sync; echo 3 > /proc/sys/vm/drop_caches\n"); // sync to disk, then drop the cache
freeCachedMemory.closeWriteChannel(); // send EOF so the shell actually exits
freeCachedMemory.waitForFinished();
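Dropping all caches system-wide is a blunt instrument, though. A gentler alternative I'm considering (untested on my device, so treat it as a sketch) is to hint the kernel, after each synced write in the accumulate function, that it may evict just this file's pages:

#include <fcntl.h>   // posix_fadvise(), POSIX_FADV_DONTNEED
#include <unistd.h>  // fsync()

// After each chunk: flush Qt's buffer, force the data to disk, then tell
// the kernel it no longer needs to cache this file's pages.
filePointerToWriteTo->flush();
fsync(filePointerToWriteTo->handle());
posix_fadvise(filePointerToWriteTo->handle(), 0, 0, POSIX_FADV_DONTNEED); // offset 0, len 0 = whole file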