I wrote a download library for my colleagues. It writes downloaded data to files.
My colleagues found that the file stays small for a long time, even after 100 MB of data has been downloaded.
So they suggested that I call flush() after every write(), so the data doesn't take up memory sitting in buffers.
But I don't think 100 MB of virtual memory is a lot, and maybe Windows has its reasons for buffering that much data.
What do you think about it?
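To make the suggestion concrete, here is a minimal sketch of what they are asking for. I'm using Python purely for illustration (my post doesn't name a language, and the URL and chunk size here are made up):

```python
import urllib.request

# Hypothetical example: the real library, URL, and chunk size differ.
url = "http://example.com/big-file.bin"
with urllib.request.urlopen(url) as response, open("big-file.bin", "wb") as f:
    while chunk := response.read(64 * 1024):
        f.write(chunk)
        f.flush()  # my colleagues' suggestion: flush after every write()
```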
I would trust the operating system to tune itself appropriately, personally.
As for "flush immediately so as not to lose data if the power dies": if the power dies halfway through a file, would you trust that the data you'd written was okay and resume the download from there? If so, maybe it's worth flushing early; but I'd weigh the complexity of resuming against the relative rarity of power failures, and just close the file when I'd read everything. If you see a half-written file, delete it and download it again from scratch.
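One common way to make half-written files easy to recognise is to download to a temporary name and only rename on success; anything still carrying the temporary name is then incomplete by definition. A rough Python sketch, purely illustrative (the `.part` suffix is a made-up convention; note also that flush() typically only moves data into the OS cache, and surviving a power cut generally needs an explicit os.fsync anyway):

```python
import os
import urllib.request

def download(url: str, dest: str) -> None:
    tmp = dest + ".part"  # hypothetical marker for incomplete files
    with urllib.request.urlopen(url) as response, open(tmp, "wb") as f:
        while chunk := response.read(64 * 1024):
            f.write(chunk)
    os.replace(tmp, dest)  # publish the file only once it is complete
```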
Well, first you should investigate and debug what is going on. The problem might be elsewhere; for example, Windows Explorer might not refresh the displayed file size quickly enough.
That said, you are right: generally, if the OS's virtual memory system decides to buffer data in RAM, it has a good reason to do so, and you should not normally interfere. After all, if there is a lot of free memory, it makes sense to use it.
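To rule out a display artefact, ask the OS for the size directly while the download runs instead of watching Explorer. A quick illustrative check in Python (the path is a placeholder):

```python
import os
import time

path = "big-file.bin"  # placeholder: point this at the file being downloaded
for _ in range(10):
    # Query the filesystem directly, bypassing Explorer's cached view.
    print(os.path.getsize(path), "bytes on disk")
    time.sleep(1)
```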