I am working in Delphi 5 (with FastMM installed) on a Win32 project, and have recently been trying to drastically reduce the memory usage of this application. So far I have cut the usage nearly in half, but I noticed something while working on a separate task. When I minimized the application, the memory usage shrank from 45 megs down to 1 meg, which I attributed to it being paged out to disk. When I restored it and resumed working, the memory only went back up to 15 megs. As I continued working, the memory usage slowly crept up again, and a minimize and restore flushed it back down to 15 megs. So to my thinking, when my code tells the system to release the memory, Windows is still holding on to it, and the actual garbage collection doesn't kick in until a lot later.
Can anyone confirm/deny this sort of behavior? Is it possible to get the memory cleaned up programmatically? If I keep using the program without doing this manual flush, I eventually get an out of memory error, and I would like to eliminate that. Thanks.
Edit: I found an article on about.com that covers a lot of this, along with some links and details on other areas of memory management.
Task Manager doesn't show the total that the application has allocated from Windows. What it shows (by default) is the working set. The working set is a concept designed to minimize page file thrashing in memory-constrained conditions. It's basically the set of pages the application touches on a regular basis; to keep the application running with decent responsiveness, the OS will endeavour to keep those pages in physical memory.
On the theory that the user does not care much about the responsiveness of minimized applications, the OS trims their working set. This means that, under physical memory pressure, pages of virtual memory owned by that process are more likely to be paged out to disk (to the page file) to make room.
Most modern systems have enough physical memory that paging isn't an issue for most applications most of the time. But a severely page-thrashing machine can be almost indistinguishable from a crashed one, with many seconds or even minutes elapsing before applications respond to user input.

So the behaviour that you are seeing is Windows trimming the working set on minimization, and then growing it back over time as the restored application touches more and more pages. It's nothing like garbage collection.
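If you want to watch this from inside the application, you can log the OS counters before and after a minimize. Below is a minimal sketch, not production code: the unit name, procedure name and Tag parameter are just illustrative, and it assumes a 32-bit Delphi 5 target, which has no PsAPI import unit, so GetProcessMemoryInfo is declared by hand.

    unit MemReport;

    interface

    procedure ReportProcessMemory(const Tag: string);

    implementation

    uses
      Windows, SysUtils;

    type
      // Mirrors the Win32 PROCESS_MEMORY_COUNTERS structure
      // (all fields are 32 bits wide in a Win32 process).
      TProcessMemoryCounters = record
        cb: DWORD;
        PageFaultCount: DWORD;
        PeakWorkingSetSize: DWORD;
        WorkingSetSize: DWORD;
        QuotaPeakPagedPoolUsage: DWORD;
        QuotaPagedPoolUsage: DWORD;
        QuotaPeakNonPagedPoolUsage: DWORD;
        QuotaNonPagedPoolUsage: DWORD;
        PagefileUsage: DWORD;
        PeakPagefileUsage: DWORD;
      end;

    // Imported directly from psapi.dll because Delphi 5 ships no unit for it.
    function GetProcessMemoryInfo(Process: THandle;
      var Counters: TProcessMemoryCounters; cb: DWORD): BOOL; stdcall;
      external 'psapi.dll';

    procedure ReportProcessMemory(const Tag: string);
    var
      Counters: TProcessMemoryCounters;
    begin
      FillChar(Counters, SizeOf(Counters), 0);
      Counters.cb := SizeOf(Counters);
      if GetProcessMemoryInfo(GetCurrentProcess, Counters, SizeOf(Counters)) then
        OutputDebugString(PChar(Format('%s: working set %d KB, committed %d KB',
          [Tag, Counters.WorkingSetSize div 1024, Counters.PagefileUsage div 1024])));
    end;

    end.

Call ReportProcessMemory from a timer or a menu item before and after minimizing: the working set figure should drop sharply while the committed figure (PagefileUsage) stays roughly where it was, which is exactly the Task Manager effect described above.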
If you're interested in memory usage by an application under Windows, there is no single most important number, but rather a range of relevant numbers:
Virtual size - this is the total amount of address space reserved by the application. Address space (i.e. what pointers can point to) may be unreserved, reserved, or committed. Unreserved address space may be allocated in the future, either by the memory manager, or by loading DLLs (the DLLs have to go somewhere in the address space), etc.
Private working set - this is the pages that are private to this application (i.e. are not shared across multiple running applications, such that a change to one is seen by all), and are part of the working set (i.e. are touched frequently by the app).
Shareable working set - this is the pages in the working set that are shareable, but may or may not actually be shared. For example, DLLs or packages (BPLs) may be loaded into the application's memory space. The code for these DLLs could potentially be shared across multiple processes, but if the DLL is loaded only once into a single application, then it is not actually shared. If the DLL is highly specific to this application, it is functionally equivalent to private working set.
Shared working set - this is the pages from the working set that are actually shared. One could imagine attributing the "cost" of these pages to any one application as the amount shared divided by the number of applications sharing the page.
Private bytes - this is the pages from the virtual address space which are committed by this application, and that aren't shared (or shareable) between applications. Pretty much every memory allocation by an application's memory manager ends up in this pool. Only pages that get used with some frequency need to be part of the working set, so this number is usually larger than the private working set. A steadily increasing private bytes count indicates either a memory leak or a long-running algorithm with large space requirements.
These numbers don't represent disjoint sets. They are different ways of summarizing the states of different kinds of pages. For example, working set = private working set + shareable working set.
Which one of these numbers is most important depends on what you are constrained by. If you are trying to do I/O using memory-mapped files, the virtual size will limit how much memory you can devote to the mapping. If you are in a physical-memory constrained environment, you want to minimize the working set. If you have many different instances of your application running simultaneously, you want to minimize private bytes and maximize shared bytes. If you are producing a bunch of different DLLs and BPLs, you want to be sure that they are actually shared, by making sure their load addresses don't clash, which would prevent sharing.
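For the virtual size specifically you don't even need PSAPI: GlobalMemoryStatus reports, among other things, how much of the process's user-mode address space (2 GB by default on 32-bit Windows) is still free. A minimal sketch follows; ShowAddressSpaceUsage is just an illustrative name.

    // Requires Windows, SysUtils and Dialogs in the uses clause.
    // Shows how much of the user-mode address space this process has
    // reserved or committed so far; when this approaches dwTotalVirtual,
    // "out of memory" errors from address space exhaustion are near.
    procedure ShowAddressSpaceUsage;
    var
      Status: TMemoryStatus;
    begin
      Status.dwLength := SizeOf(Status);
      GlobalMemoryStatus(Status);
      // dwTotalVirtual is the size of this process's user-mode address space;
      // dwAvailVirtual is the part that is neither reserved nor committed.
      ShowMessage(Format('Address space in use: %d MB of %d MB',
        [(Status.dwTotalVirtual - Status.dwAvailVirtual) div (1024 * 1024),
         Status.dwTotalVirtual div (1024 * 1024)]));
    end;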
About SetProcessWorkingSetSize:
Windows usually handles the working set automatically, depending on memory pressure. The working set does not determine whether or not you're going to hit an out of memory (OOM) error. The working set is used to make decisions about paging, i.e. what to keep in physical memory and what to leave on disk (in the case of DLLs) or page out to disk (other committed memory). Setting it won't have any effect unless more virtual memory is allocated than there is physical memory in the system.
As to its effects: if the lower bound is set high, it means the process will be hostile to other applications and try to hog memory in situations of physical memory pressure. This is one of the reasons it requires the PROCESS_SET_QUOTA access right on the process handle.
If the upper bound is set low, it means that Windows won't try hard to keep pages in physical memory for this application, and that Windows may page most of it out to disk when physical memory pressure gets high.
In most situations, you don't want to change the working set details; it's usually best to let the OS handle it. Doing so won't prevent OOM situations. Those are usually caused by address space exhaustion, when the memory manager can't commit any more memory, or, on systems with insufficient page file space to back committed virtual memory, by the page file running out of space.
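That said, if you want to reproduce the minimize-and-restore effect programmatically, purely so the Task Manager number looks smaller, passing -1 for both bounds asks Windows to trim the process's working set. A minimal sketch, assuming the DWORD parameter types used by Delphi 5's Windows.pas declaration:

    // Requires Windows in the uses clause.
    // Asks Windows to trim this process's working set - the same thing that
    // happens when the main window is minimized. It only changes which pages
    // stay resident in physical memory; committed memory is untouched, so it
    // does nothing to prevent out-of-memory errors.
    procedure TrimWorkingSet;
    begin
      SetProcessWorkingSetSize(GetCurrentProcess, DWORD(-1), DWORD(-1));
    end;

GetCurrentProcess returns a pseudo-handle with full access to the calling process, so the PROCESS_SET_QUOTA requirement mentioned above isn't an obstacle when trimming your own process.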