How to check if PyTorch is using the GPU?

How do I check if PyTorch is using the GPU? The nvidia-smi command can detect GPU activity, but I want to check it directly from inside a Python script.

asked Jan 08 '18 by vinzee


3 Answers

These functions should help:

>>> import torch

>>> torch.cuda.is_available()
True

>>> torch.cuda.device_count()
1

>>> torch.cuda.current_device()
0

>>> torch.cuda.device(0)
<torch.cuda.device at 0x7efce0b03be0>

>>> torch.cuda.get_device_name(0)
'GeForce GTX 950M'

This tells us:

  • CUDA is available and can be used by one device.
  • Device 0 refers to the GPU GeForce GTX 950M, and it is the device currently selected by PyTorch.
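
If you run these checks often, they can be wrapped in a small helper. This is a minimal sketch using only the calls shown above; the function name cuda_summary is just an illustration:

import torch

def cuda_summary():
    """Print whether CUDA is usable and which device PyTorch would use."""
    if not torch.cuda.is_available():
        print('CUDA is not available; PyTorch will run on the CPU.')
        return
    count = torch.cuda.device_count()
    current = torch.cuda.current_device()
    print(count, 'CUDA device(s) available.')
    print('Current device:', current, '-', torch.cuda.get_device_name(current))

cuda_summary()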
answered by vinzee


Since it hasn't been proposed here yet, I'm adding a method using torch.device, which is quite handy, also for initializing tensors on the correct device.

import torch

# Set device to GPU if available, else CPU
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print('Using device:', device)
print()

# Additional info when using CUDA
if device.type == 'cuda':
    print(torch.cuda.get_device_name(0))
    print('Memory Usage:')
    print('Allocated:', round(torch.cuda.memory_allocated(0)/1024**3, 1), 'GB')
    print('Cached:   ', round(torch.cuda.memory_reserved(0)/1024**3, 1), 'GB')

Edit: torch.cuda.memory_cached has been renamed to torch.cuda.memory_reserved, so use memory_cached on older versions.

Output:

Using device: cuda

Tesla K80
Memory Usage:
Allocated: 0.3 GB
Cached:    0.6 GB

As mentioned above, using device it is possible to:

  • Move tensors to the respective device:

    torch.rand(10).to(device)

  • Create a tensor directly on the device:

    torch.rand(10, device=device)

This makes switching between CPU and GPU comfortable without changing the actual code.
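
To illustrate the pattern end to end, here is a minimal device-agnostic sketch (the nn.Linear model and random input are placeholders of my own; only the device handling follows the pattern above):

import torch
import torch.nn as nn

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Model and input live on the same device; the rest of the code is
# identical whether it runs on CPU or GPU.
model = nn.Linear(10, 1).to(device)
x = torch.rand(32, 10, device=device)
y = model(x)
print(y.device)  # cuda:0 if a GPU is available, otherwise cpu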


Edit:

As there have been some questions and confusion about the cached and allocated memory, I'm adding some additional information about it:

  • torch.cuda.max_memory_cached(device=None) (renamed to torch.cuda.max_memory_reserved in newer versions)

    Returns the maximum GPU memory managed by the caching allocator in bytes for a given device.

  • torch.cuda.memory_allocated(device=None)

    Returns the current GPU memory usage by tensors in bytes for a given device.


You can either directly hand over a device, as specified further above in the post, or you can leave it as None and it will use the current_device().
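
To see the difference between the two in practice, here is a small sketch (exact numbers depend on your GPU and allocator state):

import torch

if torch.cuda.is_available():
    x = torch.rand(1024, 1024, device='cuda')  # ~4 MB of tensor data
    print(torch.cuda.memory_allocated())       # counts the live tensor
    print(torch.cuda.memory_reserved())        # caching allocator holds at least as much
    del x                                      # tensor is freed ...
    print(torch.cuda.memory_allocated())       # ... so allocated drops
    print(torch.cuda.memory_reserved())        # ... but the reserved memory stays cached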


Additional note: Old graphics cards with CUDA compute capability 3.0 or lower may be visible but cannot be used by PyTorch!
Thanks to hekimgil for pointing this out! - "Found GPU0 GeForce GT 750M which is of cuda capability 3.0. PyTorch no longer supports this GPU because it is too old. The minimum cuda capability that we support is 3.5."

answered by MBT


After you start the training loop, if you want to manually watch from the terminal whether your program is utilizing the GPU resources, and to what extent, you can simply use watch as in:

$ watch -n 2 nvidia-smi

This will continuously update the usage stats every 2 seconds until you press Ctrl+C.


If you need more control over which GPU stats are shown, you can use a more sophisticated version of nvidia-smi with --query-gpu=.... Below is a simple illustration of this:

$ watch -n 3 nvidia-smi --query-gpu=index,gpu_name,memory.total,memory.used,memory.free,temperature.gpu,pstate,utilization.gpu,utilization.memory --format=csv

which would output the stats in CSV format, something like:

[screenshot of the CSV output from the query above]

Note: There should not be any space between the comma-separated query names in --query-gpu=.... Otherwise those values will be ignored and no stats will be returned.


Also, you can check whether your installation of PyTorch detects your CUDA installation correctly by doing:

In [13]: import torch

In [14]: torch.cuda.is_available()
Out[14]: True

A True status means that PyTorch is configured correctly and can use the GPU, although you still have to move/place the tensors on the device with the necessary statements in your code.
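
You can also verify that an individual tensor actually ended up on the GPU; a quick sketch using the standard tensor attributes:

import torch

x = torch.rand(3)
print(x.is_cuda, x.device)       # False cpu

if torch.cuda.is_available():
    x = x.to('cuda')
    print(x.is_cuda, x.device)   # True cuda:0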


If you want to do this inside Python code, then look into this module:

https://github.com/jonsafari/nvidia-ml-py or in pypi here: https://pypi.python.org/pypi/nvidia-ml-py/
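
For example, a minimal sketch with that binding (assuming the package imports as pynvml; the calls below come from the NVML API it wraps):

import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU

mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
util = pynvml.nvmlDeviceGetUtilizationRates(handle)
print('Memory used:', mem.used // 1024**2, 'MB of', mem.total // 1024**2, 'MB')
print('GPU utilization:', util.gpu, '%')

pynvml.nvmlShutdown()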

answered by kmario23