Get total amount of free and available GPU memory using PyTorch

I'm using Google Colab's free GPUs for experimentation and wanted to know how much GPU memory is available to play around with. torch.cuda.memory_allocated() returns the GPU memory currently occupied, but how do we determine the total available memory using PyTorch?

asked Oct 03 '19 by Hari Prasad


People also ask

How does PyTorch allocate GPU memory?

Memory management. PyTorch uses a caching memory allocator to speed up memory allocations. This allows fast memory deallocation without device synchronizations. However, unused memory managed by the allocator will still show up as used in nvidia-smi.
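For instance, a small sketch (assuming a CUDA device is available) of how allocated and reserved memory diverge, and how empty_cache() hands cached blocks back to the driver:

import torch

x = torch.empty(1024, 1024, device='cuda')  # ~4 MiB of float32
print(torch.cuda.memory_allocated())        # bytes held by live tensors
print(torch.cuda.memory_reserved())         # bytes held by the caching allocator

del x                                       # the tensor is gone, but the cache keeps its block
print(torch.cuda.memory_allocated())        # drops back toward 0
print(torch.cuda.memory_reserved())         # unchanged; nvidia-smi still counts it as used

torch.cuda.empty_cache()                    # release unused cached blocks to the driver
print(torch.cuda.memory_reserved())         # now reduced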

How do I get graphics memory in Python?

You will need to install the nvidia-ml-py3 library in Python (pip install nvidia-ml-py3), which provides bindings to the NVIDIA Management Library. A code snippet is shown below. That's it!
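The original snippet was not preserved here; a minimal sketch of what it presumably looks like, using the pynvml module that nvidia-ml-py3 installs (device index 0 assumed):

import pynvml

pynvml.nvmlInit()                              # initialize NVML once per process
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU
info = pynvml.nvmlDeviceGetMemoryInfo(handle)
print(info.total, info.free, info.used)        # all values in bytes
pynvml.nvmlShutdown()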


3 Answers

PyTorch can give you total, reserved, and allocated memory info:

import torch

t = torch.cuda.get_device_properties(0).total_memory  # total memory on device 0, in bytes
r = torch.cuda.memory_reserved(0)                     # bytes held by the caching allocator
a = torch.cuda.memory_allocated(0)                    # bytes occupied by live tensors
f = r - a                                             # free inside reserved
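Note that f only measures the slack inside PyTorch's cache. As a rough estimate of what you could still allocate on the device (an approximation that ignores other processes, CUDA context overhead, and fragmentation):

approx_free = (t - r) + f  # untouched device memory plus slack inside the cache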

Python bindings to NVIDIA's management library can give you the info for the whole GPU (0 here means the first GPU device):

from pynvml import *

nvmlInit()
h = nvmlDeviceGetHandleByIndex(0)
info = nvmlDeviceGetMemoryInfo(h)
print(f'total    : {info.total}')
print(f'free     : {info.free}')
print(f'used     : {info.used}')

pip install pynvml

You can also check nvidia-smi to get memory info. Another option is nvtop, though at the time of writing it needs to be installed from source. Yet another tool that reports memory is gpustat (pip3 install gpustat).
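For example, nvidia-smi's query interface prints just the memory columns (values in MiB by default):

nvidia-smi --query-gpu=memory.total,memory.used,memory.free --format=csv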

If you would like to use C++/CUDA:

#include <iostream>
#include "cuda.h"
#include "cuda_runtime_api.h"

using namespace std;

int main( void ) {
    int num_gpus;
    size_t free, total;
    cudaGetDeviceCount( &num_gpus );
    for ( int gpu_id = 0; gpu_id < num_gpus; gpu_id++ ) {
        cudaSetDevice( gpu_id );
        int id;
        cudaGetDevice( &id );
        cudaMemGetInfo( &free, &total );  // free and total memory in bytes
        cout << "GPU " << id << " memory: free=" << free << ", total=" << total << endl;
    }
    return 0;
}
answered Sep 23 '22 by prosti


This is useful for me!

import pynvml

def get_memory_free_MiB(gpu_index):
    # query free memory on the given GPU via NVML, converted to MiB
    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(int(gpu_index))
    mem_info = pynvml.nvmlDeviceGetMemoryInfo(handle)
    return mem_info.free // 1024 ** 2
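Usage, assuming at least one GPU (index 0):

print(get_memory_free_MiB(0))  # free memory on the first GPU, in MiB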
answered Sep 25 '22 by Peter Pack


In recent versions of PyTorch you can also use torch.cuda.mem_get_info:

https://pytorch.org/docs/stable/generated/torch.cuda.mem_get_info.html#torch.cuda.mem_get_info
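A minimal usage sketch; it returns a (free, total) tuple in bytes for the current device:

import torch

free, total = torch.cuda.mem_get_info()  # bytes free and total on the current device
print(f'free: {free}, total: {total}')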

answered Sep 22 '22 by Iman