I'm trying to read binary data from a buffer file which is continuously written to by a different process (that I cannot modify). I'm using the following code in order to open the file:
fileH = CreateFileA((LPCSTR)filename,
                    GENERIC_READ,
                    FILE_SHARE_READ | FILE_SHARE_WRITE,
                    NULL,
                    OPEN_EXISTING,
                    FILE_ATTRIBUTE_NORMAL,
                    NULL);
And it opens correctly with no error. However, when I read data from the file, it seems to block the other process from writing to it, since I lose data.
The buffer is circular, meaning that the file size is fixed, and new data is constantly written over older data in the buffer.
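If the writer's current position in the ring can be learned somehow (for example, from a header the writing program maintains), putting one snapshot of the fixed-size buffer back into chronological order is just two slices. A minimal sketch in portable C++ (unwrap and its argument names are illustrative, not part of any real API):

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// Hypothetical helper: reorder a raw snapshot of a fixed-size circular
// buffer so the oldest byte comes first. `writePos` is the offset where
// the writer will place the *next* byte, i.e. the oldest data starts there.
std::string unwrap(const std::string& raw, std::size_t writePos) {
    // Bytes from writePos to the end are the oldest; bytes before
    // writePos are the newest, so concatenate in that order.
    return raw.substr(writePos) + raw.substr(0, writePos);
}
```

Without knowing the write position, a snapshot of the raw bytes is still complete, just rotated by an unknown amount.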
EDIT: Sometimes the most trivial solution works...
I contacted the software company and reported the bug, and within a day they posted a new version with a fix. Sorry, this won't work for everybody.
'r+' opens the file for both reading and writing. On Windows, 'b' appended to the mode opens the file in binary mode, so there are also modes like 'rb', 'wb', and 'r+b'. Reading and then writing works equally well in 'r+b' mode, but you have to use f.
During the actual reading and writing, yes. But multiple processes can open the same file at the same time and then write back. It's up to the processes themselves to ensure they don't do anything nasty. If you're writing the processes, look into flock (file lock).
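On POSIX systems, the advisory locking mentioned above can be sketched like this (lockShared is a made-up helper name, and on Windows the rough equivalent would be LockFileEx; this only works if both processes cooperate by taking locks):

```cpp
#include <cassert>
#include <fcntl.h>
#include <sys/file.h>
#include <unistd.h>

// Minimal sketch of advisory locking with flock(2): the writer would take
// LOCK_EX around each write, readers take LOCK_SH around each read. The
// lock does nothing against a process that never calls flock().
int lockShared(const char* path) {
    // O_CREAT so the demo works even if the file doesn't exist yet.
    int fd = open(path, O_RDONLY | O_CREAT, 0644);
    if (fd < 0) return -1;
    if (flock(fd, LOCK_SH) != 0) {  // blocks while a writer holds LOCK_EX
        close(fd);
        return -1;
    }
    return fd;  // caller: flock(fd, LOCK_UN) and close(fd) when done
}
```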
On Linux or Unix systems, deleting a file via rm or through a file manager application will unlink the file from the file system's directory structure; however, if the file is still open (in use by a running process) it will still be accessible to this process and will continue to occupy space on disk.
It's hard to say what your options are without knowing how the writing process is opening the file. Obviously, it's not opening the file for exclusive access and keeping it open. Otherwise you wouldn't be able to read it at all.
The behavior you describe indicates that the writing process opens the file for exclusive access, writes to it, and then closes the file. If that's the case, then you can't have your program open the file and keep it open. That would cause the writing process to fail whenever it tried to write.
If you can't modify the writing process, then your options are limited and not very attractive. Most likely, you'll have to make your program open the file, read a small chunk, close the file, and then wait for a bit before reading again. Even then, there's no guarantee that you won't have the file open when the writing process tries to write. Which, I think, you have already discovered.
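That open-read-close-wait loop can be sketched with portable C++ streams rather than the Win32 calls from the question (readNewBytes and its offset bookkeeping are illustrative assumptions, not a drop-in fix; the point is that the file is only held open briefly):

```cpp
#include <cassert>
#include <cstddef>
#include <fstream>
#include <string>

// Hypothetical polling reader: open, read anything past `offset`, close.
// Returns the new bytes and advances `offset`. Between calls the caller
// would sleep for a bit, leaving the file closed for the writer.
std::string readNewBytes(const std::string& path, std::streamoff& offset) {
    std::ifstream in(path, std::ios::binary);
    if (!in) return {};                       // writer may have it locked
    in.seekg(0, std::ios::end);
    std::streamoff size = in.tellg();
    if (size <= offset) return {};            // nothing new (or file shrank)
    in.seekg(offset);
    std::string chunk(static_cast<std::size_t>(size - offset), '\0');
    in.read(&chunk[0], static_cast<std::streamsize>(chunk.size()));
    offset = size;
    return chunk;                             // file closes when `in` dies
}
```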
Do you know whether the writing process loses the data when it can't open the file, or whether it buffers the data and writes it the next time it can open the file? If it buffers, then my suggestion of stepping through the file a little at a time could work. Otherwise, you're going to lose data.
There is no open mode that I know of that is the equivalent of "Open the file for reading, but if somebody wants exclusive access, then let them have it."
Another possibility would be to have your program rename the file whenever you want to read, and then delete the renamed file after you've read it. This assumes, of course, that the writing process will create a new file if necessary. Even then, there might be a problem if the writing process tries to write while you're renaming. I don't think that'll be a problem (the rename could be atomic as far as the file system is concerned), but it's something you'd have to research.
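A sketch of that rename-then-read idea using std::filesystem (C++17; takeSnapshot and its error handling are my own assumptions, and it only helps if the writer recreates the file):

```cpp
#include <cassert>
#include <filesystem>
#include <fstream>
#include <sstream>
#include <string>

namespace fs = std::filesystem;

// Hypothetical sketch: move the live file aside (rename on the same volume
// is typically atomic at the filesystem level), let the writer recreate it,
// then consume and delete the renamed copy at leisure.
std::string takeSnapshot(const fs::path& live, const fs::path& aside) {
    std::error_code ec;
    fs::rename(live, aside, ec);
    if (ec) return {};                 // writer may have it open right now
    std::ifstream in(aside, std::ios::binary);
    std::ostringstream data;
    data << in.rdbuf();
    in.close();
    fs::remove(aside);
    return data.str();
}
```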
I would recommend looking at the source code of the excellent Far Manager. Its internal viewer can handle multi-gigabyte files easily, has no problems showing files which are being written and can update the changed file contents almost in real-time. I've never noticed any blocking issues with the files being displayed.
The source code related to the question seems to be in the viewer.cpp file.
One interesting thing is that it does not use GENERIC_READ:
ViewFile.Open(strFileName, FILE_READ_DATA, FILE_SHARE_READ|FILE_SHARE_WRITE|FILE_SHARE_DELETE, nullptr, OPEN_EXISTING);
I suspect dropping SYNCHRONIZE might be important here.
The file change detection is in Viewer::ProcessKey, in the KEY_IDLE case:
// Smart file change check -- thanks Dzirt2005
//
bool changed = (
    ViewFindData.ftLastWriteTime.dwLowDateTime != NewViewFindData.ftLastWriteTime.dwLowDateTime ||
    ViewFindData.ftLastWriteTime.dwHighDateTime != NewViewFindData.ftLastWriteTime.dwHighDateTime ||
    ViewFindData.nFileSize != NewViewFindData.nFileSize
);
if ( changed )
    ViewFindData = NewViewFindData;
else {
    if ( !ViewFile.GetSize(NewViewFindData.nFileSize) || FileSize == static_cast<__int64>(NewViewFindData.nFileSize) )
        return TRUE;
    changed = FileSize > static_cast<__int64>(NewViewFindData.nFileSize); // true if file shrank
}
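A portable analog of that timestamp-plus-size check can be written with std::filesystem instead of WIN32_FIND_DATA (an illustrative sketch, not Far Manager's actual code): poll cheap metadata and only re-read the file when it moves.

```cpp
#include <cassert>
#include <cstdint>
#include <filesystem>
#include <fstream>

namespace fs = std::filesystem;

// Last-seen metadata for the watched file.
struct FileStamp {
    fs::file_time_type mtime{};
    std::uintmax_t size = 0;
};

// Returns true (and updates `last`) when the file's write time or size
// differs from the previous poll; false on no change or on error.
bool changedSince(const fs::path& p, FileStamp& last) {
    std::error_code ec;
    FileStamp now;
    now.mtime = fs::last_write_time(p, ec);
    if (ec) return false;
    now.size = fs::file_size(p, ec);
    if (ec) return false;
    const bool changed = now.mtime != last.mtime || now.size != last.size;
    if (changed) last = now;
    return changed;
}
```

Note the same caveat Far Manager handles explicitly: size alone is not enough for a circular buffer, since the file can change without growing, which is why the timestamp is checked too.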
Cached file reading is implemented in cache.cpp. But there's nothing really earth-shattering there, just some Seek() and Read() calls (that eventually result in SetFilePointerEx and ReadFile API calls). OVERLAPPED is not used.