I'm developing an application targeted at desktop systems that may have as little as 256 MB of RAM (Windows 2000 and up). The application works with a large file (>256 MB) containing fixed-size records of about 160 bytes each. It runs a rather lengthy process during which, over time, it randomly accesses about 90% of the file (for both reading and writing). Any given record write will occur no more than 1,000 record accesses after the read of that same record (I can tune this value).
I have two obvious options for this process: regular I/O (ReadFile, WriteFile) and memory mapping (CreateFileMapping, MapViewOfFile). The latter should be much more efficient on systems with enough memory, but on low-memory systems it will swap out most of the other applications' memory, which for my application is a no-no. Is there a way to keep the process from eating up all memory (e.g., by forcing pages I'm no longer accessing to be flushed)? If that is not possible, I must fall back to regular I/O; I would have liked to use overlapped I/O for the writes (since access is so random), but the documentation says writes of less than 64 KB are always served synchronously.
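For reference, the regular-I/O path would look roughly like this (a minimal sketch; the helper name and the record-size constant are just illustrative, not my actual code):

#include <windows.h>

const DWORD RECORD_SIZE = 160;

// Read one record by index with a positioned, synchronous read.
// Passing an OVERLAPPED structure on a non-overlapped handle only
// supplies the file offset; the call still blocks until completion.
BOOL ReadRecord(HANDLE hFile, DWORD index, void* buffer)
{
    OVERLAPPED ov = {0};
    ULONGLONG offset = (ULONGLONG)index * RECORD_SIZE;
    ov.Offset     = (DWORD)(offset & 0xFFFFFFFF);
    ov.OffsetHigh = (DWORD)(offset >> 32);
    DWORD bytesRead = 0;
    return ReadFile(hFile, buffer, RECORD_SIZE, &bytesRead, &ov)
           && bytesRead == RECORD_SIZE;
}

WriteRecord() would be the mirror image with WriteFile().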
Any ideas for improving I/O are welcome.
I finally found a way, derived from another thread here. The trick is to call VirtualUnlock() on the ranges I need to uncommit. Although the function returns FALSE with error 0x9e, ERROR_NOT_LOCKED ("The segment is already unlocked"), the memory is actually released, even if the pages were modified (the file is correctly updated).
Here's my sample test program:
#include "stdafx.h"
void getenter(void)
{
int ch;
for(;;)
{
ch = getch();
if( ch == '\n' || ch == '\r' ) return;
}
}
int main(int argc, char* argv[])
{
char* fname = "c:\\temp\\MMFTest\\TestFile.rar"; // 54 MB
HANDLE hfile = CreateFile( fname, GENERIC_READ | GENERIC_WRITE, 0, NULL, OPEN_EXISTING, FILE_FLAG_RANDOM_ACCESS, NULL );
if( hfile == INVALID_HANDLE_VALUE )
{
fprintf( stderr, "CreateFile() error 0x%08x\n", GetLastError() );
getenter();
return 1;
}
HANDLE map_handle = CreateFileMapping( hfile, NULL, PAGE_READWRITE | SEC_RESERVE, 0, 0, 0);
if( map_handle == NULL )
{
fprintf( stderr, "CreateFileMapping() error 0x%08x\n", GetLastError() );
getenter();
CloseHandle(hfile);
return 1;
}
char* map_ptr = (char*) MapViewOfFile( map_handle, FILE_MAP_WRITE | FILE_MAP_READ, 0, 0, 0 );
if( map_ptr == NULL )
{
fprintf( stderr, "MapViewOfFile() error 0x%08x\n", GetLastError() );
getenter();
CloseHandle(map_handle);
CloseHandle(hfile);
return 1;
}
// Memory usage here is 704KB
printf("Mapped.\n"); getenter();
for( int n = 0 ; n < 10000 ; n++ )
{
map_ptr[n*4096]++;
}
// Memory usage here is ~40MB
printf("Used.\n"); getenter();
if( !VirtualUnlock( map_ptr, 5000 * 4096 ) )
{
// Memory usage here is ~20MB
// 20MB already freed!
fprintf( stderr, "VirtualUnlock() error 0x%08x\n", GetLastError() );
getenter();
UnmapViewOfFile(map_ptr);
CloseHandle(map_handle);
CloseHandle(hfile);
return 1;
}
// Code never reached
printf("VirtualUnlock() executed.\n"); getenter();
UnmapViewOfFile(map_ptr);
CloseHandle(map_handle);
CloseHandle(hfile);
printf("Unmapped and closed.\n"); getenter();
return 0;
}
As you can see, the working set of the program is reduced after the VirtualUnlock() call, just as I needed. All I have to do now is keep track of the pages I change so I can unlock them as appropriate (see the sketch below).
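A minimal sketch of that bookkeeping, assuming 4 KB pages as in the test above (the dirty-page set and the helper names are hypothetical, not part of my program):

#include <windows.h>
#include <set>

const size_t PAGE_SIZE = 4096;
std::set<size_t> g_dirtyPages;   // indices of pages written through the view

// Remember which page a write touched so it can be released later.
void MarkDirty(char* map_ptr, char* addr)
{
    g_dirtyPages.insert((size_t)(addr - map_ptr) / PAGE_SIZE);
}

// Release the physical memory behind every page we touched.
// VirtualUnlock() reports ERROR_NOT_LOCKED (0x9e), but as shown above
// the pages are written back to the file and dropped from the working set.
void ReleaseDirtyPages(char* map_ptr)
{
    for( std::set<size_t>::iterator it = g_dirtyPages.begin();
         it != g_dirtyPages.end(); ++it )
    {
        VirtualUnlock(map_ptr + *it * PAGE_SIZE, PAGE_SIZE);
    }
    g_dirtyPages.clear();
}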
Just map the whole file into memory. This consumes virtual address space but not physical memory. The file is read from disk piecewise, and pages are evicted from memory under the same policies that govern the swap file.
VirtualUnlock() does not appear to work. What you need to do is call FlushViewOfFile(map_ptr, 0) immediately before UnmapViewOfFile(map_ptr). Windows Task Manager will not show the real physical memory usage; use Process Explorer from Sysinternals instead.
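In other words, something along these lines:

// Write the dirty pages back to the file, then release the view.
FlushViewOfFile(map_ptr, 0);   // 0 = flush the whole mapped range
UnmapViewOfFile(map_ptr);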
Are you mapping the whole file as one block with MapViewOfFile? If you are, try mapping smaller parts. You can flush a view with FlushViewOfFile().
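For example, a rough sketch of mapping only a window around a given record (the window size and helper name are illustrative); note that view offsets must be multiples of the system allocation granularity (typically 64 KB):

#include <windows.h>

const DWORD  RECORD_SIZE = 160;
const SIZE_T VIEW_SIZE   = 1 << 20;   // 1 MB window, for example

// Map only the part of the file containing the given record.
// Returns the view base; *recordPtr points at the record inside it.
char* MapRecordWindow(HANDLE map_handle, DWORD recordIndex, char** recordPtr)
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);   // si.dwAllocationGranularity is typically 64 KB
    ULONGLONG offset = (ULONGLONG)recordIndex * RECORD_SIZE;
    ULONGLONG base   = offset - (offset % si.dwAllocationGranularity);
    // Caller must clamp VIEW_SIZE near the end of the file, or the call fails.
    char* view = (char*) MapViewOfFile(map_handle, FILE_MAP_READ | FILE_MAP_WRITE,
                                       (DWORD)(base >> 32), (DWORD)(base & 0xFFFFFFFF),
                                       VIEW_SIZE);
    if( view != NULL )
        *recordPtr = view + (size_t)(offset - base);
    return view;   // UnmapViewOfFile(view) when done with this window
}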