In top, I noticed that my C program (using CUDA 3.2) has a virtual size of 28 GB or more (looking at VIRT), on every run, right from the beginning. This doesn't make any sense to me. The resident memory makes sense and is only around 2 GB on my largest data set. I know that at some point in the past the virtual size was not this large, but I'm not sure when the change occurred.
Why would my process use 28 GB of virtual memory (or why would top's VIRT be so large)? I understand that VIRT includes the executable binary (only 437K), shared libraries, and the "data area". What is the "data area"? How can I find out how much memory the shared libraries require? What about the other elements of my process's total memory?
Contents of /proc/<pid>/smaps (1022 lines) are here: http://pastebin.com/fTJJneXr
One entry in smaps accounts for most of it, but it has no label... how can I find out what this unlabeled 28 GB entry is?
200000000-900000000 ---p 00000000 00:00 0
Size: 29360128 kB
Rss: 0 kB
Pss: 0 kB
Shared_Clean: 0 kB
Shared_Dirty: 0 kB
Private_Clean: 0 kB
Private_Dirty: 0 kB
Referenced: 0 kB
Anonymous: 0 kB
Swap: 0 kB
KernelPageSize: 4 kB
MMUPageSize: 4 kB
Locked: 0 kB
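For reference, that address range spans 0x900000000 - 0x200000000 = 0x700000000 bytes = 29360128 kB, i.e. exactly 28 GB, which accounts for essentially all of the unexplained VIRT. To see how the rest of VIRT breaks down (binary, each shared library, heap, stack, anonymous regions), here is a minimal sketch, not part of the original post, that walks /proc/<pid>/smaps and prints the virtual size and pathname of every mapping plus a grand total; the file name vsum.c and the 1 MB display threshold are arbitrary choices made for this example.

/* vsum.c - sum the "Size:" field of every mapping in /proc/<pid>/smaps.
 * Build: gcc -o vsum vsum.c
 * Run:   ./vsum <pid>
 */
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    char path[64], line[1024], name[512] = "", perms[8] = "";
    unsigned long start, end, kb, total_kb = 0;
    FILE *f;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 1;
    }
    snprintf(path, sizeof path, "/proc/%s/smaps", argv[1]);

    f = fopen(path, "r");
    if (!f) {
        perror(path);
        return 1;
    }

    while (fgets(line, sizeof line, f)) {
        /* A mapping header looks like:
         * "200000000-900000000 ---p 00000000 00:00 0 [optional pathname]" */
        if (sscanf(line, "%lx-%lx %7s", &start, &end, perms) == 3) {
            name[0] = '\0';
            /* The pathname, if present, is the sixth whitespace-separated field. */
            sscanf(line, "%*s %*s %*s %*s %*s %511s", name);
        } else if (sscanf(line, "Size: %lu kB", &kb) == 1) {
            total_kb += kb;
            if (kb >= 1024)   /* only show mappings of 1 MB or more */
                printf("%10lu kB  %-4s  %s\n", kb, perms,
                       name[0] ? name : "[anonymous]");
        }
    }
    fclose(f);

    printf("%10lu kB  total (should roughly match VIRT in top)\n", total_kb);
    return 0;
}

The big mapping quoted above would show up here as [anonymous] with "---p" permissions, i.e. reserved address space with no access rights and no resident pages (Rss: 0 kB).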
--
Ubuntu 11.04 64-bit
16 GB RAM
Virtual memory lets the finite physical memory be shared efficiently among many processes, each with its own protected virtual address space. The sum of these virtual address spaces is typically larger than the installed physical memory.
UVA requires CUDA to allocate enough virtual memory to map all of both GPU and system memory. See post #5 in the corresponding thread on the NVIDIA forums.
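To confirm that the large VIRT comes from CUDA context creation (and its address-space reservation) rather than from your own allocations, here is a minimal sketch, not from the forum thread, that prints the process's VmSize and VmRSS from /proc/self/status before and after the first CUDA runtime call; the file name vmcheck.cu and the helper print_vm() are just illustrative names. On a UVA-enabled driver/toolkit the jump in VmSize should be on the order of the unlabeled smaps entry above.

/* vmcheck.cu - show how much virtual address space CUDA context creation reserves.
 * Build: nvcc -o vmcheck vmcheck.cu
 */
#include <stdio.h>
#include <string.h>
#include <cuda_runtime.h>

/* Print the VmSize (total virtual) and VmRSS (resident) lines from
 * /proc/self/status, tagged so the before/after values can be compared. */
static void print_vm(const char *tag)
{
    char line[256];
    FILE *f = fopen("/proc/self/status", "r");
    if (!f)
        return;
    while (fgets(line, sizeof line, f)) {
        if (!strncmp(line, "VmSize:", 7) || !strncmp(line, "VmRSS:", 6))
            printf("%s %s", tag, line);
    }
    fclose(f);
}

int main(void)
{
    print_vm("before CUDA init:");

    /* Any runtime API call creates the CUDA context; cudaFree(0) is a
     * common way to force initialization without allocating anything. */
    cudaError_t err = cudaFree(0);
    if (err != cudaSuccess)
        fprintf(stderr, "CUDA init failed: %s\n", cudaGetErrorString(err));

    print_vm("after CUDA init:");
    return 0;
}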