I'm using Colaboratory and PyTorch to run a GAN that uses an unusual dataset, which is currently stored locally on my machine. To access these files I connected to a local runtime (as per https://research.google.com/colaboratory/local-runtimes.html). However, Colaboratory now uses my own GPU when running, which it did not do on previous runs. I know this because current runs are much slower, since they are using my GTX 1060 6GB instead of Colab's Tesla K80.
I checked this using
torch.cuda.get_device_name(0)
which returns "GeForce GTX 1060 6G" when I am connected locally. This is the case even with Edit -> Notebook Settings -> Hardware Accelerator -> "GPU" selected.
However, when I am not connected locally, and instead use the (default) "Connect to hosted runtime" option,
torch.cuda.get_device_name(0)
does return "Tesla K80".
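For reference, here is a slightly fuller version of the check I run in either runtime (a minimal sketch using only standard torch.cuda calls, nothing specific to my notebook):

import torch

# Minimal sketch: report whatever CUDA device the current runtime
# (local or hosted) exposes to PyTorch.
if torch.cuda.is_available():
    print("CUDA devices visible:", torch.cuda.device_count())
    print("Device 0:", torch.cuda.get_device_name(0))
else:
    print("No CUDA device visible to this runtime.")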
I've had trouble uploading my dataset to Drive, as it is a large image dataset, and would like to carry on using the local runtime.
How do I use both the local runtime and Colab's amazing Tesla K80? Any help would be much appreciated.
Google provides a free GPU for your Colab notebooks.
Choose Runtime > Change Runtime Type and set Hardware Accelerator to GPU. For examples of how to use GPU and TPU runtimes in Colab, see the Tensorflow With GPU and TPUs In Colab example notebooks.
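As a quick sanity check that the hosted GPU was actually allocated, you can list it from a cell (a minimal sketch, assuming the runtime has nvidia-smi available, as Colab's hosted GPU runtimes do; in a notebook cell this is often just run as !nvidia-smi):

import subprocess

# Print the nvidia-smi output to confirm which GPU this runtime was given.
print(subprocess.run(["nvidia-smi"], capture_output=True, text=True).stdout)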
Colab is using your GPU because you connected it to a local runtime. That is what connecting to your own runtime means: the notebook executes on your machine instead of on Google's servers. If you still want Google's servers and processing capabilities, I'd suggest looking into connecting your Google Drive to the hosted Colaboratory runtime instead.
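A minimal sketch of how that could look on a hosted runtime (the /content/drive mount point is standard, but the "MyDrive/my_dataset" path and the assumption that the images are arranged in one subfolder per class for ImageFolder are placeholders of mine):

from google.colab import drive
import torchvision
from torchvision import transforms

# Mount Google Drive into the hosted runtime's filesystem.
drive.mount("/content/drive")

# Hypothetical dataset location on Drive; ImageFolder expects one
# subdirectory per class.
dataset = torchvision.datasets.ImageFolder(
    root="/content/drive/MyDrive/my_dataset",
    transform=transforms.ToTensor(),
)
print(len(dataset), "images found")

For a large image dataset it's usually faster to upload a single zip archive to Drive and unpack it on the runtime than to read thousands of small files directly from the mounted Drive.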