Google Colab now offers TPUs as a hardware-accelerator option in the Runtime settings. I found an example of how to use a TPU in the official TensorFlow GitHub repository, but the example did not work on Google Colaboratory; it got stuck on one of the lines.
When I print the available devices on Colab, it returns an empty list for the TPU accelerator. Does anyone know how to use a TPU on Colab?
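For reference, a typical way to print the devices a Colab runtime exposes (assuming TensorFlow 1.x, as in the notebook linked below) is:

from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())

Note that this only lists local devices; as the answer below explains, the TPU is a remote worker, so it will not appear in this list.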
Here's a Colab-specific TPU example: https://colab.research.google.com/github/tensorflow/tpu/blob/master/tools/colab/shakespeare_with_tpu_and_keras.ipynb
The key lines are the ones that connect to the TPU itself:
import os
import tensorflow as tf

# This address identifies the TPU we'll use when configuring TensorFlow.
TPU_WORKER = 'grpc://' + os.environ['COLAB_TPU_ADDR']

...

tpu_model = tf.contrib.tpu.keras_to_tpu_model(
    training_model,
    strategy=tf.contrib.tpu.TPUDistributionStrategy(
        tf.contrib.cluster_resolver.TPUClusterResolver(TPU_WORKER)))
(Unlike a GPU, using the TPU requires an explicit connection to the TPU worker, so you'll need to tweak your training and inference definitions in order to observe a speedup.)
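To make the wiring concrete, here is a minimal end-to-end sketch under the same assumptions as the notebook (TensorFlow 1.x with the contrib TPU APIs on a Colab TPU runtime); the model, shapes, and data below are arbitrary placeholders, not taken from the notebook:

import os
import numpy as np
import tensorflow as tf

# A small placeholder Keras model; any Keras model is converted the same way.
training_model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation='relu', input_shape=(10,)),
    tf.keras.layers.Dense(1),
])
# keras_to_tpu_model expects a tf.train optimizer rather than a Keras one.
training_model.compile(optimizer=tf.train.AdamOptimizer(), loss='mse')

# Connect to the remote TPU worker and rebuild the model for the TPU.
TPU_WORKER = 'grpc://' + os.environ['COLAB_TPU_ADDR']
tpu_model = tf.contrib.tpu.keras_to_tpu_model(
    training_model,
    strategy=tf.contrib.tpu.TPUDistributionStrategy(
        tf.contrib.cluster_resolver.TPUClusterResolver(TPU_WORKER)))

# Train as usual; a batch size divisible by the 8 TPU cores works best.
x = np.random.rand(1024, 10).astype(np.float32)
y = np.random.rand(1024, 1).astype(np.float32)
tpu_model.fit(x, y, batch_size=128, epochs=1)

The conversion returns a new model object that streams its batches to the TPU worker over gRPC, which is why the explicit TPU_WORKER address is needed where a GPU would be picked up automatically.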