I am using a file as a cache for big data. One thread writes to it sequentially, another thread reads it sequentially.
Can I be sure that all data that has been written (by write()) in one thread can be read() from another thread, assuming a proper "happens-before" relationship in terms of the Java memory model? Is this behavior documented?
In my JDK, FileOutputStream does not override flush(), and OutputStream.flush() is empty. That's why I'm wondering...
The streams in question are owned exclusively by a class that I have full control of. Each stream is guaranteed to be accessed by one thread only. My tests show that it works as expected, but I'm still wondering whether this is guaranteed and documented.
See also this related discussion.
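For illustration only, here is a minimal sketch of the setup described above, assuming the happens-before edge is established through an AtomicLong that tracks how many bytes have been written (the class, field, and method names are made up):

```java
import java.io.Closeable;
import java.io.EOFException;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch: one long-lived FileOutputStream used only by the
// writer thread, one long-lived FileInputStream used only by the reader
// thread. The AtomicLong is updated only after write() has returned, which
// gives the reader a happens-before edge before it calls read().
class FileCache implements Closeable {
    private final FileOutputStream out;
    private final FileInputStream in;
    private final AtomicLong bytesWritten = new AtomicLong();
    private long bytesRead; // touched by the reader thread only

    FileCache(File file) throws IOException {
        this.out = new FileOutputStream(file); // creates the file
        this.in = new FileInputStream(file);
    }

    // Writer thread only.
    void append(byte[] data) throws IOException {
        out.write(data);
        bytesWritten.addAndGet(data.length); // publish the new length
    }

    // Reader thread only: returns whatever has been published since the last call.
    byte[] readNew() throws IOException {
        long published = bytesWritten.get(); // the writer's addAndGet() happens-before this get()
        int n = (int) (published - bytesRead);
        byte[] buf = new byte[n];
        int off = 0;
        while (off < n) {
            int r = in.read(buf, off, n - off);
            if (r < 0) {
                throw new EOFException("published bytes not yet visible");
            }
            off += r;
        }
        bytesRead = published;
        return buf;
    }

    @Override
    public void close() throws IOException {
        out.close();
        in.close();
    }
}
```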
Assuming you are using a POSIX file system, then yes.
FileInputStream and FileOutputStream on *nix use the read and write system calls internally. The documentation for write says that reads will see the results of past writes:
After a write() to a regular file has successfully returned:
Any successful read() from each byte position in the file that was modified by that write shall return the data specified by the write() for that position until such byte positions are again modified.
I'm pretty sure NTFS on Windows will provide the same read()/write() guarantees.
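As a quick illustration of that guarantee (a self-contained sketch, not code from the question): once write() has returned on one stream, a separate stream opened on the same file can read the data back, with no flush() involved:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class ReadAfterWriteDemo {
    public static void main(String[] args) throws IOException {
        File file = File.createTempFile("cache", ".bin");
        file.deleteOnExit();

        byte[] payload = "hello".getBytes(StandardCharsets.UTF_8);

        // Writer side: write() hands the bytes straight to the OS; no flush()
        // is needed because FileOutputStream is unbuffered.
        try (FileOutputStream out = new FileOutputStream(file)) {
            out.write(payload);

            // Reader side: a second stream on the same file sees the data
            // as soon as write() has returned.
            try (FileInputStream in = new FileInputStream(file)) {
                byte[] buf = new byte[payload.length];
                int n = in.read(buf);
                System.out.println(n + " bytes: " + new String(buf, 0, n, StandardCharsets.UTF_8));
            }
        }
    }
}
```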
You can't talk about a "happens-before" relationship in terms of the Java memory model between your FileInputStream and FileOutputStream objects, since they don't share any memory or threads. The VM is free to reorder operations on them as long as it honors your synchronization requirements. When you have proper synchronization between reads and writes and no application-level buffering, you are safe.
However, FileInputStream and FileOutputStream do share a file, which leaves things up to the OS; on mainstream operating systems you can expect a read to see the data of any write that preceded it, in order.
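To make the "no application-level buffering" caveat concrete, here is a hedged sketch of the pitfall (the counter-based publishing scheme is just one possible synchronization choice): if the writer wraps its stream in a BufferedOutputStream, the bytes may still sit in the Java-side buffer when the reader looks at the file, so flush() must be called before publishing the new length.

```java
import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.concurrent.atomic.AtomicLong;

// Sketch of the buffering pitfall: with a BufferedOutputStream the data is
// not in the file until flush() pushes it down to the FileOutputStream.
class BufferedWriterSide {
    private final BufferedOutputStream out;
    private final AtomicLong publishedLength = new AtomicLong();

    BufferedWriterSide(File file) throws IOException {
        this.out = new BufferedOutputStream(new FileOutputStream(file));
    }

    void append(byte[] data) throws IOException {
        out.write(data);          // may only land in the in-memory buffer
        out.flush();              // required: force the bytes into the file
        publishedLength.addAndGet(data.length); // only now tell the reader
    }
}
```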