This is more of a miscellaneous question, but is there a built-in way to detect from within a notebook whether it is running on Colab Pro? I simply can't find any documentation on this topic. For instance, my usage would look like this:
from google.colab import utils  # I made this up
colab_pro = utils.colab_is_pro()
if colab_pro:
    # train model with higher settings
    ...
else:
    # train model with lower settings
    ...
Currently I do have a way of doing this, but it's rather hacky:
gpu_name = !nvidia-smi --query-gpu=gpu_name --format=csv
# You get a Tesla T4 with free Colab and faster GPUs with Colab Pro
colab_pro = 'T4' not in str(gpu_name)  # gpu_name is a list of output lines, so check the stringified form
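For what it's worth, the same GPU-name heuristic can be written without IPython line magic using PyTorch, which comes preinstalled on Colab. This is just a sketch under the same assumption (free tier usually assigns a T4, Pro assigns faster GPUs), not an official check:

# Same GPU-name heuristic via PyTorch, which is preinstalled on Colab.
import torch

if torch.cuda.is_available():
    gpu_name = torch.cuda.get_device_name(0)  # e.g. 'Tesla T4'
    colab_pro = 'T4' not in gpu_name  # heuristic: free tier usually gets a T4
else:
    # No GPU attached, so the GPU-name heuristic tells us nothing
    colab_pro = False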
FYI, here is the Colab notebook I'm working on: https://colab.research.google.com/github/Namburger/edgetpu-ssdlite-mobiledet-retrain/blob/master/ssdlite_mobiledet_transfer_learning_cat_vs_dog.ipynb#scrollTo=Mg1C8UwStK7i
# A Colab Pro high-RAM runtime should have more than 20 GB of total memory.
from psutil import virtual_memory

ram_gb = virtual_memory().total / 1e9
print('Your runtime has {:.1f} gigabytes of RAM\n'.format(ram_gb))
if ram_gb < 20:
    print('Not using a high-RAM runtime')
    # train model with lower settings
else:
    print('You are using a high-RAM runtime!')
    # train model with higher settings
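If you want a single reusable check, the RAM and GPU-name heuristics can be combined. This is a sketch, not an official Colab API: the function name looks_like_colab_pro is hypothetical, and both the 20 GB threshold and the T4 test are assumptions about typical free-tier hardware:

from psutil import virtual_memory
import torch

def looks_like_colab_pro(ram_threshold_gb=20):
    # High-RAM runtimes (a Pro feature) report well over 20 GB of total memory
    high_ram = virtual_memory().total / 1e9 > ram_threshold_gb
    # Free-tier GPU runtimes usually get a Tesla T4; Pro tends to assign faster GPUs
    fast_gpu = (torch.cuda.is_available()
                and 'T4' not in torch.cuda.get_device_name(0))
    return high_ram or fast_gpu

settings = 'higher' if looks_like_colab_pro() else 'lower'
print('Training with {} settings'.format(settings))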
Also, you can check the available memory in Colab as follows:
!cat /proc/meminfo
A Colab Pro environment should have more than 20 GB of total memory.
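If you want that same number programmatically without psutil, you can parse the MemTotal line yourself; a minimal sketch (the helper name total_ram_gb is just illustrative):

def total_ram_gb():
    # /proc/meminfo reports MemTotal in kB; convert to decimal gigabytes
    with open('/proc/meminfo') as f:
        for line in f:
            if line.startswith('MemTotal:'):
                return int(line.split()[1]) / 1e6
    raise RuntimeError('MemTotal not found in /proc/meminfo')

print('Total RAM: {:.1f} GB'.format(total_ram_gb()))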
https://colab.research.google.com/notebooks/pro.ipynb#scrollTo=V1G82GuO-tez