I know I can access the current GPU using torch.cuda.current_device(), but how can I get a list of all the currently available GPUs?
To use data parallelism with PyTorch, you can use the DataParallel class. You wrap your network (an nn.Module) in a DataParallel object, optionally passing the GPU IDs it is allowed to use. When you call the wrapped model, each input batch is split into chunks that are distributed across those GPUs, and the outputs are gathered back onto the default device.
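A minimal sketch of that wrapping, assuming a toy nn.Linear model (the layer sizes here are arbitrary); on a CPU-only machine the wrapper is simply skipped:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)  # toy network for illustration

if torch.cuda.device_count() > 1:
    # Wrap the model so each forward pass splits the batch
    # across all visible GPUs (or pass device_ids=[...] to restrict them).
    model = nn.DataParallel(model)

device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = model.to(device)

x = torch.randn(64, 10, device=device)
y = model(x)  # batch is scattered across GPUs and the outputs gathered back
```

Note that DataParallel splits along the batch dimension, so the batch size should be at least the number of GPUs in use.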
Check GPU Availability
The easiest way to check whether you have access to GPUs is to call torch.cuda.is_available(). If it returns True, PyTorch can see at least one CUDA-capable GPU, which means the NVIDIA driver is correctly installed.
You can list all the available GPUs by doing:
>>> import torch
>>> available_gpus = [torch.cuda.device(i) for i in range(torch.cuda.device_count())]
>>> available_gpus
[<torch.cuda.device object at 0x7f2585882b50>]
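The objects above are device context handles, which are not very informative to print. If you want human-readable names instead, a small variant using torch.cuda.get_device_name works (it yields an empty list on a CPU-only machine):

```python
import torch

# One name per visible GPU, e.g. ['NVIDIA GeForce RTX 3090', ...];
# an empty list if no CUDA device is visible.
gpu_names = [torch.cuda.get_device_name(i)
             for i in range(torch.cuda.device_count())]
print(gpu_names)
```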
Check how many GPUs are available with PyTorch
import torch
num_of_gpus = torch.cuda.device_count()
print(num_of_gpus)
If you want to use the first GPU:
device = 'cuda:0' if torch.cuda.is_available() else 'cpu'
Replace the 0 in the command above with another index if you want to use a different GPU.
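Putting that together, a short sketch of selecting the device and creating a tensor directly on it (the fallback to 'cpu' keeps the snippet runnable on machines without a GPU):

```python
import torch

# Pick the first GPU if one is available, otherwise fall back to the CPU.
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

t = torch.zeros(3, device=device)  # tensor allocated on the chosen device
print(t.device)
```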