I had some code that worked on Colab (GPU runtime) just a short while ago. Suddenly I am getting
The NVIDIA driver on your system is too old (found version 10010).
nvcc shows Cuda compilation tools, release 10.1, V10.1.243
I tried torch versions 1.5.1 and then 1.13.0; both give the same error.
There is a discussion where other people report the same problem, with no clear resolution: https://github.com/pytorch/pytorch/issues/27738
Anyone having the same problem?
The light-the-torch package is designed to solve exactly this type of issue. Try this:
!pip install light-the-torch
!ltt install torch torchvision
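After the install finishes, a quick sanity check (assuming your runtime actually has a GPU attached) is to confirm that PyTorch can see it:
import torch
print(torch.__version__)          # installed PyTorch version
print(torch.version.cuda)         # CUDA version the wheel was built for
print(torch.cuda.is_available())  # should print True if the driver is compatible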
I think this might have to do with the fact that Google Colab randomly assigns you a GPU when you start a runtime. Some of those machines may have different drivers installed, which could cause this error to appear only some of the time, as you've experienced.
You can see which CUDA version the current driver supports by running !nvidia-smi in Colab. You can then install a version of PyTorch that is compatible with that CUDA version. The PyTorch website can generate a pip command for your language/environment/CUDA version, and there is also a list of previous versions and their corresponding commands if your CUDA version isn't supported by the current release.
This is what I got working with a CUDA version of 10.1:
!pip install torch==1.7.1+cu101 torchvision==0.8.2+cu101 -f https://download.pytorch.org/whl/torch_stable.html
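If you want to double-check the match afterwards, here is a rough sketch that compares the CUDA version reported by nvidia-smi with the one the installed torch wheel was built against. It assumes nvidia-smi is on the PATH and that the driver is new enough to print a "CUDA Version:" field (very old drivers don't):
import re, subprocess
import torch

# Driver-side CUDA version, parsed from nvidia-smi's header line
smi_output = subprocess.run(["nvidia-smi"], capture_output=True, text=True).stdout
match = re.search(r"CUDA Version:\s*([\d.]+)", smi_output)
driver_cuda = match.group(1) if match else "unknown"

# Wheel-side CUDA version, as reported by PyTorch itself
print("driver supports CUDA:", driver_cuda)
print("torch built for CUDA:", torch.version.cuda)
print("GPU visible to torch:", torch.cuda.is_available())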