According to Wikipedia:
> A page fault is a trap to the software raised by the hardware when a program accesses a page that is mapped in the virtual address space, but *not loaded in physical memory*. (emphasis mine)
Okay, that makes sense.
But if that's the case, why is it that whenever the process information in Process Hacker is refreshed, I see about 15 page faults?
Or in other words, why is any memory getting paged out? (I have no idea if it's user or kernel memory.) I have no page file, and the RAM usage is about 1.2 GB out of 4 GB, which is after a clean reboot. There's no shortage of any resource; why would anything get paged out?
**Invalid conditions and impacts of page faults**

The most common condition is when an application attempts to access memory at a location outside of its allocated address space. A second condition occurs when the operating system needs more physical memory than is available in the computer's main memory.
Valid page faults are common and necessary: they are how any operating system that uses virtual memory, such as Windows, macOS, or Linux, increases the amount of memory available to programs.
In a large program, try to keep code that will be modified and code that will not be modified in separate sections; this reduces page traffic by reducing the number of pages that get dirtied. Also, try to prevent I/O buffers from crossing page boundaries unnecessarily (see the sketch below).
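For the I/O buffer point, here is a minimal sketch (plain Win32, my own illustration, not anyone's production code): VirtualAlloc always returns page-aligned memory, so a buffer allocated this way and sized in whole pages cannot straddle a page boundary in the middle of a read or write.

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    SYSTEM_INFO si;
    GetSystemInfo(&si); // si.dwPageSize is typically 4096 bytes on x86/x64

    // VirtualAlloc hands back memory that starts on a page boundary
    // (in fact, on an allocation-granularity boundary), so a buffer
    // sized as a whole number of pages never straddles a page
    // boundary unnecessarily.
    SIZE_T bufferSize = 4 * si.dwPageSize;
    void *ioBuffer = VirtualAlloc(NULL, bufferSize,
                                  MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
    if (!ioBuffer)
        return 1;

    printf("Page size: %lu, I/O buffer at %p\n", si.dwPageSize, ioBuffer);

    // ... use ioBuffer with ReadFile/WriteFile ...

    VirtualFree(ioBuffer, 0, MEM_RELEASE);
    return 0;
}
```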
(I'm the author of Process Hacker.)
Firstly:
> A page fault is a trap to the software raised by the hardware when a program accesses a page that is mapped in the virtual address space, but not loaded in physical memory.
That's not entirely correct, as explained later in the same article (Minor page fault). There are soft page faults, where all the kernel needs to do is add a page to the working set of the process. Here's a table from the Windows Internals book (I've excluded the ones that result in an access violation):
| Reason for Fault | Result |
|---|---|
| Accessing a page that isn’t resident in memory but is on disk in a page file or a mapped file | Allocate a physical page, and read the desired page from disk and into the relevant working set |
| Accessing a page that is on the standby or modified list | Transition the page to the relevant process, session, or system working set |
| Accessing a demand-zero page | Add a zero-filled page to the relevant working set |
| Writing to a copy-on-write page | Make process-private (or session-private) copy of page, and replace original in process or system working set |
Page faults can occur for a variety of reasons, as you can see above. Only one of them has to do with reading from the disk. If you try to allocate a block from the heap and the heap manager allocates new pages, then accesses those pages, you'll get a demand-zero page fault. If you try to hook a function in kernel32 by writing to kernel32's pages, you'll get a copy-on-write fault because those pages are silently being copied so your changes don't affect other processes.
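To make the demand-zero case concrete, here is a minimal sketch (my own illustration, not Process Hacker code) that commits fresh pages, touches them, and watches the process's page-fault counter climb, even though no disk I/O happens. It assumes 4 KB pages for brevity.

```c
#include <windows.h>
#include <psapi.h>
#include <stdio.h>

// Link with psapi.lib.

static DWORD GetPageFaultCount(void)
{
    PROCESS_MEMORY_COUNTERS pmc = { sizeof(pmc) };
    // PageFaultCount includes soft (demand-zero, standby) and hard faults.
    GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc));
    return pmc.PageFaultCount;
}

int main(void)
{
    DWORD before = GetPageFaultCount();

    // Commit 64 fresh pages; they are demand-zero until first touched.
    SIZE_T size = 64 * 4096;
    volatile BYTE *p = VirtualAlloc(NULL, size,
                                    MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
    if (!p)
        return 1;

    // Touch each page: every first access triggers a soft,
    // demand-zero page fault (no disk read involved).
    for (SIZE_T i = 0; i < size; i += 4096)
        p[i] = 1;

    DWORD after = GetPageFaultCount();
    printf("Page faults added by touching the pages: %lu\n", after - before);

    VirtualFree((LPVOID)p, 0, MEM_RELEASE);
    return 0;
}
```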
Now to answer your question more specifically: Process Hacker only seems to have page faults when updating its service information - that is, when it calls EnumServicesStatusEx, which RPCs to the SCM (services.exe). My guess is that in the process, a lot of memory is being allocated, leading to demand-zero page faults (the service information requires several pages to store, IIRC).
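For reference, the usual calling pattern looks roughly like the sketch below (standard Win32 usage, not the actual Process Hacker source). The freshly allocated buffer that receives the service entries is exactly the kind of memory whose pages are demand-zero until the SCM data is copied into them.

```c
#include <windows.h>
#include <stdio.h>

// Link with Advapi32.lib.

int main(void)
{
    SC_HANDLE scm = OpenSCManagerW(NULL, NULL, SC_MANAGER_ENUMERATE_SERVICE);
    if (!scm)
        return 1;

    DWORD bytesNeeded = 0, servicesReturned = 0, resumeHandle = 0;

    // First call just reports how big the buffer must be
    // (fails with ERROR_MORE_DATA and fills bytesNeeded).
    EnumServicesStatusExW(scm, SC_ENUM_PROCESS_INFO,
                          SERVICE_WIN32, SERVICE_STATE_ALL,
                          NULL, 0, &bytesNeeded,
                          &servicesReturned, &resumeHandle, NULL);

    // Several pages' worth of fresh memory: touching it as the
    // service entries are copied in produces demand-zero faults.
    BYTE *buffer = HeapAlloc(GetProcessHeap(), 0, bytesNeeded);

    resumeHandle = 0; // start the enumeration from the beginning
    if (buffer &&
        EnumServicesStatusExW(scm, SC_ENUM_PROCESS_INFO,
                              SERVICE_WIN32, SERVICE_STATE_ALL,
                              buffer, bytesNeeded, &bytesNeeded,
                              &servicesReturned, &resumeHandle, NULL))
    {
        printf("Enumerated %lu services using %lu bytes\n",
               servicesReturned, bytesNeeded);
    }

    if (buffer)
        HeapFree(GetProcessHeap(), 0, buffer);
    CloseServiceHandle(scm);
    return 0;
}
```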