I'm working with large, growing files using the managed wrappers for memory-mapped files: MemoryMappedFile and MemoryMappedViewAccessor.
I create empty files using this code:
long length = 1024L * 1024L * 1L; // 1MB
// create blank file of desired size (nice and quick!)
FileStream fs = new FileStream(filename, FileMode.CreateNew);
fs.Seek(length, SeekOrigin.Begin);
fs.WriteByte(0);
fs.Close();
// open MMF and view accessor for whole file
this._mmf = MemoryMappedFile.CreateFromFile(filename, FileMode.Open);
this._view = this._mmf.CreateViewAccessor(0, 0, MemoryMappedFileAccess.ReadWrite);
That works fine, up to 1GB. When I try 2GB, I get an IOException:
Not enough storage is available to process this command.
at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
at System.IO.MemoryMappedFiles.MemoryMappedView.CreateView(SafeMemoryMappedFileHandle memMappedFileHandle, MemoryMappedFileAccess access, Int64 offset, Int64 size)
at System.IO.MemoryMappedFiles.MemoryMappedFile.CreateViewAccessor(Int64 offset, Int64 size, MemoryMappedFileAccess access)
at (my code here)
I have 64-bit Windows 7, the app is running as 64-bit, and I have 6GB of RAM. As far as I can tell, none of that should matter. Yes, these are large amounts of data, but as I understand it, MemoryMappedFile and the associated classes are exactly the tools for dealing with data of this size.
According to the documentation (http://msdn.microsoft.com/en-us/library/dd267577.aspx), IOException literally means 'an I/O error occurred'. However, the file is sitting on disk just fine.
As mentioned, the app regularly increases the file size as needed, and in fact the error occurs at a fairly random point somewhere between ~400MB and ~2GB. Starting at 1GB always succeeds; starting at the default 1MB fails much sooner, presumably because resources are released and re-allocated more often. (I always Flush and Close the view, the MMF, and the streams.)
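For reference, the grow-and-remap step looks roughly like this (a simplified sketch; Grow and _filename are placeholders for my actual code, and I've used SetLength rather than the seek-and-write trick for brevity):
// Simplified sketch of the grow-and-remap cycle (names are placeholders).
private void Grow(long newLength)
{
    // Tear down the existing view and mapping before touching the file.
    this._view.Flush();
    this._view.Dispose();
    this._mmf.Dispose();

    // Extend the file on disk to the new length.
    using (FileStream fs = new FileStream(this._filename, FileMode.Open))
    {
        fs.SetLength(newLength);
    }

    // Re-open the mapping and a view over the whole (larger) file.
    this._mmf = MemoryMappedFile.CreateFromFile(this._filename, FileMode.Open);
    this._view = this._mmf.CreateViewAccessor(0, 0, MemoryMappedFileAccess.ReadWrite);
}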
I need to randomly access the whole range of data. I'm hoping I don't need to dynamically maintain a dictionary of MemoryMappedViewAccessor objects; my understanding of the virtual memory system at work here is that pages from a file of any size should simply be paged in and out as necessary by Windows.
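To be concrete, this is roughly the kind of bookkeeping I'm hoping to avoid (a hypothetical GetViewFor helper with an arbitrary window size, not code I actually have):
// Hypothetical: one view per fixed-size window of the file, created lazily.
// Assumes the file length is a multiple of WindowSize; reads and writes that
// straddle a window boundary would need extra handling.
private const long WindowSize = 256L * 1024 * 1024; // arbitrary 256MB windows
private readonly Dictionary<long, MemoryMappedViewAccessor> _views =
    new Dictionary<long, MemoryMappedViewAccessor>();

private MemoryMappedViewAccessor GetViewFor(long offset)
{
    long window = offset / WindowSize;
    MemoryMappedViewAccessor view;
    if (!_views.TryGetValue(window, out view))
    {
        view = this._mmf.CreateViewAccessor(window * WindowSize, WindowSize,
                                            MemoryMappedFileAccess.ReadWrite);
        _views[window] = view;
    }
    return view;
}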
In the form of a question: why is this happening? How can I stop it? Is there a better way to achieve full, random, read-write access to files of any size? Up to 100GB for instance?
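For what it's worth, one variation I've considered is letting CreateFromFile size the file itself via its capacity parameter, instead of pre-extending it with a FileStream (untested on my side, and assuming the capacity overload behaves as documented):
long length = 2L * 1024 * 1024 * 1024; // 2GB
// FileMode.CreateNew plus an explicit capacity should create the file
// and extend it to 'length' in a single call.
this._mmf = MemoryMappedFile.CreateFromFile(
    filename, FileMode.CreateNew, null, length, MemoryMappedFileAccess.ReadWrite);
this._view = this._mmf.CreateViewAccessor(0, 0, MemoryMappedFileAccess.ReadWrite);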
The app was indeed set to target x86 rather than x64 in the specific project build configuration I had selected.
My guess is my process address space had become full, since it was running in 32-bit mode.
Solution - change platform target to x64 and run on a 64-bit OS.
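As a quick sanity check for anyone hitting the same thing, the bitness of the running process can be verified at runtime (Environment.Is64BitProcess requires .NET 4, which the memory-mapped file classes need anyway; IntPtr.Size works everywhere):
// True only when the process itself is running as 64-bit, regardless of the OS.
Console.WriteLine("64-bit process: {0}", Environment.Is64BitProcess);
Console.WriteLine("64-bit OS:      {0}", Environment.Is64BitOperatingSystem);
// Older-style check: pointer size is 8 bytes in a 64-bit process, 4 in a 32-bit one.
Console.WriteLine("IntPtr.Size:    {0}", IntPtr.Size);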