Google Colab now offers a TPU option under the Runtime accelerator settings. I found an example, "How to use TPU", in the official TensorFlow GitHub repository, but the example did not work on Google Colaboratory. It got stuck on the following line:
tf.contrib.tpu.keras_to_tpu_model(model, strategy=strategy)
When I print the available devices in Colab, it returns [] for the TPU accelerator. Does anyone know how to use a TPU on Colab?
TPUs are Google's custom-developed application-specific integrated circuits (ASICs) used to accelerate machine learning workloads. They are available through Google Colab, the TPU Research Cloud, and Cloud TPU.
The number of TPU cores available to Colab notebooks is currently 8. Takeaways: from observing the training time, the TPU takes considerably longer to train than the GPU when the batch size is small, but as the batch size increases, the TPU's performance becomes comparable to that of the GPU.
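For reference, here is a minimal sketch of how you could confirm the core count yourself from a Colab notebook with the TPU runtime selected. It assumes TensorFlow 2.x; the resolver arguments may differ on newer TPU VM runtimes.

import tensorflow as tf

# Locate the Colab TPU and connect TensorFlow to it.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)

# Typically prints 8 logical TPU devices on Colab.
print("TPU devices:", tf.config.list_logical_devices('TPU'))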
Here's a Colab-specific TPU example: https://colab.research.google.com/github/tensorflow/tpu/blob/master/tools/colab/shakespeare_with_tpu_and_keras.ipynb
The key lines are the ones that connect to the TPU itself:
import os
import tensorflow as tf

# This address identifies the TPU we'll use when configuring TensorFlow.
TPU_WORKER = 'grpc://' + os.environ['COLAB_TPU_ADDR']

...

# Convert the existing Keras model into a TPU-compatible one.
tpu_model = tf.contrib.tpu.keras_to_tpu_model(
    training_model,
    strategy=tf.contrib.tpu.TPUDistributionStrategy(
        tf.contrib.cluster_resolver.TPUClusterResolver(TPU_WORKER)))
(Unlike a GPU, using the TPU requires an explicit connection to the TPU worker, so you'll need to tweak your training and inference definitions in order to observe a speedup.)
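Note that keras_to_tpu_model lived in tf.contrib, which was removed in TensorFlow 2.x. If the snippet above fails on a current Colab runtime, the following is a minimal sketch of the TF 2.x equivalent using tf.distribute.TPUStrategy; the model architecture here is only a placeholder.

import tensorflow as tf

# Connect to the Colab TPU and create a distribution strategy for it.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

# Build and compile the model inside the strategy scope so its variables
# are placed on the TPU cores.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation='relu', input_shape=(784,)),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer='adam',
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=['accuracy'],
    )

# model.fit(...) then runs on the TPU as usual.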