I am trying to use a GPU in Google Colab. Below are the details of the PyTorch and CUDA versions installed in my Colab runtime.
Torch 1.3.1 CUDA 10.1.243
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2018 NVIDIA Corporation
Built on Sat_Aug_25_21:08:01_CDT_2018
Cuda compilation tools, release 10.0, V10.0.130
I am pretty new to using a GPU for transfer learning on PyTorch models. torch.cuda.is_available() returns False and I am unable to use a GPU, while torch.backends.cudnn.enabled returns True. What might be going wrong here?
Make sure your Hardware accelerator is set to GPU.
Runtime > Change runtime type > Hardware Accelerator
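After switching the runtime to GPU (note that Colab restarts the runtime when you change it, so re-run your cells), a quick sanity check along these lines should confirm that PyTorch can see the device. This is just a minimal sketch using standard PyTorch calls:

    import torch

    # Should print True once the GPU runtime is active.
    print(torch.cuda.is_available())

    if torch.cuda.is_available():
        # Name of the attached GPU and the CUDA version PyTorch was built with.
        print(torch.cuda.get_device_name(0))
        print(torch.version.cuda)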
In case anyone else comes here and makes the same mistake I was making:
If you are trying to check whether a GPU is available and you write:

    if torch.cuda.is_available:
        print('GPU available')
    else:
        print('Please set GPU via Edit -> Notebook Settings.')

it will always look like a GPU is available, because torch.cuda.is_available without parentheses is the function object itself, which is always truthy. You need to call it: use torch.cuda.is_available(), not torch.cuda.is_available.
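For completeness, a minimal sketch of the correct check, combined with the usual device-selection pattern you would use when fine-tuning a model (the model and tensor here are just placeholders for illustration):

    import torch
    import torch.nn as nn

    # Correct: is_available is called, not merely referenced.
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    print('Using device:', device)

    # Placeholder model, only to show the .to(device) pattern
    # used when fine-tuning / transfer learning.
    model = nn.Linear(10, 2).to(device)
    x = torch.randn(4, 10, device=device)
    print(model(x).shape)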