This morning I stumbled across a surprising number of page faults where I did not expect them. Yes, I probably should not worry, but it still strikes me as odd, because in my understanding they should not happen. And I'd like it better if they didn't.
The application (under WinXP Pro 32bit) reserves a larger section (1GB) of address space with VirtualAlloc(MEM_RESERVE) and later allocates moderately large blocks (20-50MB) of memory with VirtualAlloc(MEM_COMMIT). This is done in a worker thread ahead of time, the intent being to stall the main thread as little as possible. Obviously, you can never guarantee that no page faults happen unless the memory region is currently locked, but a few of them are certainly tolerable (and unavoidable). Surprisingly, every single page faults. Always.
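To make the setup concrete, here is a minimal sketch of that reserve-then-commit pattern (the 1GB/32MB sizes are placeholders and the worker-thread plumbing is left out):

#include <Windows.h>

int main()
{
    // Reserve 1GB of address space up front; nothing is committed yet,
    // so no physical storage or commit charge is consumed at this point.
    void* base = VirtualAlloc(NULL, 1024u * 1024u * 1024u, MEM_RESERVE, PAGE_NOACCESS);
    if (base == NULL)
        return 1;

    // Later (from a worker thread in the real application), commit a
    // moderately large block inside the reservation. Commit charge rises,
    // but the pages are not necessarily backed by physical RAM yet.
    void* block = VirtualAlloc(base, 32u * 1024u * 1024u, MEM_COMMIT, PAGE_READWRITE);
    return block ? 0 : 2;
}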
The assumption was thus that the system creates pages lazily, only when they are first touched rather than when they are committed, which somehow makes sense too (although the documentation suggests something different). Fair enough, my bad.
The obvious workaround is therefore VirtualLock/VirtualUnlock, which forces the system to create those pages, as they must exist after VirtualLock returns. Surprisingly, still every single page faults.
So I wrote a little test program which performed all the above steps in sequence, sleeping 5 seconds in between each, to rule out that something was wrong in the other code. The results were:
MEM_RESERVE 1GB ---> success, zero CPU, zero time, nothing happens
MEM_COMMIT 1GB ---> success, zero CPU, zero time, working set increases by 2MB, 512 page faults (i.e. 8 bytes of metadata allocated in user space per page)
for(... += 128kB) { VirtualLock(128kB); VirtualUnlock(128kB); } ---> success, zero CPU, zero time, nothing happens
for(... += 4096) *addr = 0; ---> 262144 page faults, about 0.25 seconds (~95% kernel time), 1GB increase for both "working set" and "physical" inside Process Explorer
VirtualFree ---> zero CPU, zero time, both "working set" and "physical" instantly go *poof*

My expectation was that since each page had been locked once, it must physically exist at least after that. It might of course still be moved in and out of the working set as the quota is exceeded (merely changing one reference, as long as sufficient RAM is available). Yet neither the execution time, nor the working set, nor the physical memory metrics seem to support this. Rather, as it looks, each single accessed page is created upon faulting, even if it had been locked previously. Of course I can touch every page manually in a worker thread, but there must be a cleaner way too?
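If you want to reproduce these numbers programmatically rather than watching Process Explorer, here is a compressed, self-contained sketch of the same sequence, using GetProcessMemoryInfo to sample the process's page fault counter (reserve and commit are folded into a single call for brevity):

#include <Windows.h>
#include <psapi.h>   // GetProcessMemoryInfo; link with psapi.lib
#include <stdio.h>

static DWORD Faults()
{
    PROCESS_MEMORY_COUNTERS pmc;
    pmc.cb = sizeof(pmc);
    GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc));
    return pmc.PageFaultCount;
}

int main()
{
    const SIZE_T oneGB = 1024u * 1024u * 1024u;
    char* p = (char*) VirtualAlloc(NULL, oneGB, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
    if (p == NULL)
        return 1;

    DWORD before = Faults();
    for (SIZE_T off = 0; off < oneGB; off += 128 * 1024)  // lock/unlock in 128kB chunks
    {
        VirtualLock(p + off, 128 * 1024);
        VirtualUnlock(p + off, 128 * 1024);
    }
    printf("faults during lock/unlock: %lu\n", Faults() - before);

    before = Faults();
    for (SIZE_T off = 0; off < oneGB; off += 4096)        // touch one byte per page
        p[off] = 0;
    printf("faults during touching:    %lu\n", Faults() - before);

    VirtualFree(p, 0, MEM_RELEASE);
    return 0;
}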
Am I making a wrong assumption about what VirtualLock should do, or am I not understanding something right about virtual memory? Any idea how to tell the OS in a "clean, legitimate, working" way that I'll be wanting memory, and I'll be wanting it for real?
UPDATE:
In reaction to Harry Johnston's suggestion, I tried the somewhat problematic approach of actually calling VirtualLock on a gigabyte of memory. For this to succeed, you must first set the process's working set size accordingly, since the default quotas are 200k/1M, which means VirtualLock cannot possibly lock a region larger than 200k (or rather, it cannot lock more than 200k altogether, and that is minus whatever is already locked for I/O or for another reason).
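In code, that preparation step looks roughly like the following sketch (CommitAndLockOneGB is just an illustrative name; the complete test program further down does effectively the same thing):

#include <Windows.h>

// Sketch: raise the working set quotas, then commit and lock a gigabyte.
bool CommitAndLockOneGB(void*& p)
{
    const SIZE_T oneGB = 1024u * 1024u * 1024u;

    // The default quotas (200k/1M) are far too small to lock a gigabyte,
    // so raise them first; this call is what needs the PROCESS_SET_QUOTA
    // access right discussed below.
    if (!SetProcessWorkingSetSize(GetCurrentProcess(), oneGB, 2 * oneGB))
        return false;

    p = VirtualAlloc(NULL, oneGB, MEM_COMMIT, PAGE_READWRITE);
    if (p == NULL)
        return false;

    // Permitted now that the minimum working set covers the whole region.
    return VirtualLock(p, oneGB) != 0;
}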
After setting a minimum working set size of 1GB and a maximum of 2GB, all the page faults happen the moment VirtualAlloc(MEM_COMMIT) is called. "Virtual size" in Process Explorer jumps up by 1GB instantly. So far, it looked really, really good.
However, looking closer, "Physical" remains as it is; actual memory is really only used the moment you touch it.
VirtualLock remains a no-op (fault-wise), but raising the minimum working set size kind of got closer to the goal.
There are two problems with tampering with the WS size, however. First, you're generally not meant to have a gigabyte of minimum working set in a process, because the OS tries hard to keep that amount of memory locked. This would be acceptable in my case (it's actually more or less just what I ask for).
The bigger problem is that SetProcessWorkingSetSize needs the PROCESS_SET_QUOTA access right, which is no problem as "administrator", but it fails when you run the program as a restricted user (for a good reason), and it triggers the "allow possibly harmful program?" alert of some well-known Russian antivirus software (for no good reason, but alas, you can't turn it off).
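For completeness, the fallback mentioned earlier (touching every page from a worker thread) needs no quota rights at all. A minimal sketch, with TouchJob/TouchPages as illustrative names:

#include <Windows.h>

struct TouchJob
{
    volatile char* base;   // start of the committed region
    SIZE_T size;           // size of the committed region in bytes
};

// Pre-fault a committed region by writing one byte per 4kB page.
// Needs neither PROCESS_SET_QUOTA nor any privilege; the page faults
// are simply paid on this worker thread instead of the main one.
static DWORD WINAPI TouchPages(LPVOID arg)
{
    TouchJob* job = (TouchJob*) arg;
    for (SIZE_T off = 0; off < job->size; off += 4096)
        job->base[off] = 0;
    return 0;
}

// Usage (after VirtualAlloc(MEM_COMMIT) has returned base and size):
//   TouchJob job = { (volatile char*) base, size };
//   HANDLE worker = CreateThread(NULL, 0, TouchPages, &job, 0, NULL);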
Technically, VirtualLock is a hint, so the OS is allowed to ignore it. It is backed by the NtLockVirtualMemory syscall, which on ReactOS/Wine is implemented as a no-op; however, Windows does back the syscall with real work (MiLockVadRange).
VirtualLock isn't guaranteed to succeed. Calls to this function require the SE_LOCK_MEMORY_PRIVILEGE to work, and the addresses must fulfil security and quota restrictions. Additionally, after a VirtualUnlock, the kernel is no longer obliged to keep your page in memory, so a page fault after that is valid behavior.
And as Raymond Chen points out, when you unlock the memory, the kernel may formally release the page. This means that a VirtualLock on the next page might obtain that very same physical page again, so when you touch the original page you'll still get a page fault.
"VirtualLock remains a no-op (fault-wise)"
I tried to reproduce this, but it worked as one might expect. Running the example code shown at the bottom of this post:
VirtualAlloc with MEM_COMMIT 2500 MB of RAM (2 page faults)
VirtualLock all of that (about 641,250 page faults)

This all works pretty much as expected. 2500 MB of RAM is 640,000 pages. The numbers add up. Also, as far as the OS-wide RAM counters go, commit charge goes up at VirtualAlloc, while physical memory usage goes up at VirtualLock.
So VirtualLock is most definitely not a no-op on my Win7 x64 machine. If I don't call it, the page faults, as expected, shift to where I start writing to the RAM. They still total just over 640,000. Plus, the first time the memory is written to takes longer.
"Rather, as it looks, each single accessed page is created upon faulting, even if it had been locked previously."
This is not wrong. There is no guarantee that accessing a locked-then-unlocked page won't fault. You lock it, it gets mapped to physical RAM. You unlock it, and it's free to be unmapped instantly, making a fault possible. You might hope it will stay mapped, but no guarantees...
For what it's worth, on my system with a few gigabytes of physical RAM free, it works the way you were hoping for: even if I follow my VirtualLock with an immediate VirtualUnlock and set the minimum working set size back to something small, no further page faults occur.
Here's what I did. I ran the test program (below) with and without the code that immediately unlocks the memory and restores a sensible minimum working set size, and then forced physical RAM to run out in each scenario. Before forcing low RAM, neither program gets any page faults. After forcing low RAM, the program that keeps the memory locked retains its huge working set and has no further page faults. The program that unlocked the memory, however, starts getting page faults.
This is easiest to observe if you suspend the process first, since otherwise the constant memory writes keep it all in the working set even if the memory isn't locked (obviously a desirable thing). But suspend the process, force low RAM, and watch the working set shrink only for the program that has unlocked the RAM. Resume the process, and witness an avalanche of page faults.
In other words, at least on Win7 x64, everything works exactly as you expected it to, using the code supplied below.
"There are two problems with tampering with the WS size, however. First, you're generally not meant to have a gigabyte of minimum working set in a process"
Well... if you want to VirtualLock, you are already tampering with it. The only thing that SetProcessWorkingSetSize does is allow you to tamper with it. It doesn't degrade performance by itself; it's VirtualLock that does - but only if the system actually runs low on physical RAM.
Here's the complete program:
#include <stdio.h>
#include <tchar.h>
#include <Windows.h>
#include <iostream>
using namespace std;
int _tmain(int argc, _TCHAR* argv[])
{
SIZE_T chunkSize = 2500LL * 1024LL * 1024LL; // 2,621,440,000 bytes = 640,000 4kB pages
int sleep = 5000;
Sleep(sleep);
cout << "Setting working set size... ";
if (!SetProcessWorkingSetSize(GetCurrentProcess(), chunkSize + 5001001L, chunkSize * 2))
return -1;
cout << "done" << endl;
Sleep(sleep);
cout << "VirtualAlloc... ";
UINT8* data = (UINT8*) VirtualAlloc(NULL, chunkSize, MEM_COMMIT, PAGE_READWRITE);
if (data == NULL)
return -2;
cout << "done" << endl;
Sleep(sleep);
cout << "VirtualLock... ";
if (VirtualLock(data, chunkSize) == 0)
return -3;
//if (VirtualUnlock(data, chunkSize) == 0) // enable or disable to experiment with unlocks
// return -3;
//if (!SetProcessWorkingSetSize(GetCurrentProcess(), 5001001L, chunkSize * 2))
// return -1;
cout << "done" << endl;
Sleep(sleep);
cout << "Writes to the memory... ";
while (true)
{
int* end = (int*) (data + chunkSize);
for (int* d = (int*) data; d < end; d++)
*d = (int) d;
cout << "done ";
}
return 0;
}
Note that this code puts the thread to sleep after VirtualLock. According to a 2007 post by Raymond Chen, the OS is free to page it all out of physical RAM at this point and until the thread wakes up again. Note also that MSDN claims otherwise, saying that this memory will not be paged out, regardless of whether all threads are sleeping or not. On my system, the pages certainly remain in physical RAM while the only thread is sleeping. I suspect Raymond's advice applied in 2007 but is no longer true in Win7.