Get GPU memory usage programmatically

Tags: c++, cuda, gpu, opencl

I'm looking for a reliable way to determine current GPU memory usage, preferably in C++/C. I have found several ways of obtaining a usage figure, such as the following:

  • DirectDraw
  • DxDiag
  • WMI
  • DXGI
  • D3D9

Those methods are not accurate enough (most are off by about a hundred megabytes). I tried nvapi.h but didn't see anything I could use to query memory. I thought the methods listed above were the only options, but then I ran into a tool called GPU-Z that gives me memory readings accurate to the nearest megabyte, even when OpenCL is putting my GTX 580 under nearly full load. I can verify that I am at peak memory usage by allocating a few more megabytes until OpenCL returns an allocation-failure error code.
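
For reference, a crude version of that probe might look like the sketch below. This is a hedged illustration, not code from the original post: the 16 MB chunk size and the single-GPU device selection are arbitrary assumptions, and the error codes involved are CL_MEM_OBJECT_ALLOCATION_FAILURE or CL_OUT_OF_RESOURCES. Note that it deliberately exhausts device memory, so it should not be run alongside other GPU work.

    // Rough sketch: estimate remaining GPU memory by allocating fixed-size
    // buffers until the OpenCL runtime refuses. Chunk size and device
    // selection are arbitrary choices for illustration.
    #include <CL/cl.h>
    #include <cstdio>
    #include <vector>

    int main() {
        cl_platform_id platform;
        clGetPlatformIDs(1, &platform, nullptr);

        cl_device_id device;
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr);

        cl_int err;
        cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, &err);
        cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, &err);

        const size_t chunk = 16 * 1024 * 1024;   // 16 MB per allocation
        std::vector<char> host(chunk, 0);
        std::vector<cl_mem> buffers;

        for (;;) {
            cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE, chunk, nullptr, &err);
            if (err != CL_SUCCESS)                // e.g. CL_MEM_OBJECT_ALLOCATION_FAILURE
                break;
            // Touch the buffer so the runtime actually backs it with device memory.
            err = clEnqueueWriteBuffer(queue, buf, CL_TRUE, 0, chunk,
                                       host.data(), 0, nullptr, nullptr);
            if (err != CL_SUCCESS) { clReleaseMemObject(buf); break; }
            buffers.push_back(buf);
        }

        std::printf("Roughly %zu MB could still be allocated.\n",
                    buffers.size() * chunk / (1024 * 1024));

        for (cl_mem b : buffers) clReleaseMemObject(b);
        clReleaseCommandQueue(queue);
        clReleaseContext(ctx);
        return 0;
    }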

Looking at the imports from GPU-Z, I see nothing interesting other than:

kernel32.dll: LoadLibraryA, GetProcAddress, VirtualAlloc, VirtualFree

My guess is that LoadLibraryA is used to load a DLL that queries GPU memory and sensors. If this DLL exists, where does it live? I'm looking for a solution for both AMD and NVIDIA if possible (using different APIs for each is OK).

asked May 10 '13 by roboto1986

People also ask

How do I check my GPU memory usage?

In Windows Task Manager, open the Performance tab, scroll down the left pane, and select the "GPU" option to watch real-time usage. It displays separate graphs for different kinds of work on your system, such as video encoding or gameplay.

How do I get graphics memory in Python?

Install the nvidia-ml-py3 library (pip install nvidia-ml-py3), which provides Python bindings to the NVIDIA Management Library (NVML), and query device memory through those bindings.

What is a CUDA out-of-memory error?

If your model reports "cuda runtime error(2): out of memory", it means exactly what it says: you have run out of memory on your GPU. Since PyTorch often handles large amounts of data, small mistakes can quickly consume all of your GPU memory; fortunately, the fixes in these cases are often simple.

How do I limit GPU memory usage TensorFlow?

To limit TensorFlow to a specific set of GPUs, use the tf.config.set_visible_devices method. In some cases it is desirable for the process to allocate only a subset of the available memory, or to grow memory usage only as the process needs it.


3 Answers

cudaMemGetInfo (documented in the CUDA Runtime API reference) requires nothing more than the CUDA runtime API and returns the free and total memory on the current device.
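
A minimal sketch of that call (device 0 is assumed; compile with nvcc or link against the CUDA runtime):

    // Minimal sketch: report used/free/total memory on the current CUDA device.
    #include <cuda_runtime.h>
    #include <cstdio>

    int main() {
        size_t free_bytes = 0, total_bytes = 0;
        cudaError_t err = cudaMemGetInfo(&free_bytes, &total_bytes);
        if (err != cudaSuccess) {
            std::fprintf(stderr, "cudaMemGetInfo failed: %s\n", cudaGetErrorString(err));
            return 1;
        }
        std::printf("Used: %zu MB, free: %zu MB, total: %zu MB\n",
                    (total_bytes - free_bytes) / (1024 * 1024),
                    free_bytes / (1024 * 1024),
                    total_bytes / (1024 * 1024));
        return 0;
    }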

And as Erik pointed out, there is similar functionality in NVML.

answered by Robert Crovella


Check out the function nvmlDeviceGetMemoryInfo in the NVIDIA Management Library (NVML), https://developer.nvidia.com/nvidia-management-library-nvml:

"Retrieves the amount of used, free and total memory available on the device, in bytes."

I don't know whether AMD has something equivalent.

answered by Erik Smistad


D3DKMTQueryStatistics is what you need.

A similar question has been asked here: How to query GPU Usage in DirectX?

answered by Vertexwahn