In TensorFlow 1.14 I'm trying to use tf.data.experimental.prefetch_to_device(device=...) to prefetch my data to the GPU. But I'm not always training on a GPU; I often train on a CPU (especially during development).
Is there a way to get the current default device in use? TensorFlow picks the CPU when I set CUDA_VISIBLE_DEVICES=-1, and otherwise it picks the GPU; the default usually works.
So far I've only found a way to list the visible devices with sess.list_devices(), but there must be a way to query the current default device so I don't have to manually change the device argument in prefetch_to_device every time, right?
There is currently no API for querying the default device directly. The closest you can get is what you already have:
device = 'gpu:0' if tf.test.is_gpu_available() else 'cpu'
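For example, a minimal sketch of feeding that detected device into prefetch_to_device (the range dataset is just a hypothetical stand-in for your real input pipeline):

```python
import tensorflow as tf

# Pick the device once at startup: GPU if one is visible, CPU otherwise.
# CUDA_VISIBLE_DEVICES=-1 hides all GPUs, so this falls back to the CPU.
device = '/gpu:0' if tf.test.is_gpu_available() else '/cpu:0'

# A toy dataset standing in for the real input pipeline.
dataset = tf.data.Dataset.range(100).batch(10)

# Prefetch batches to whichever device was detected.
dataset = dataset.apply(tf.data.experimental.prefetch_to_device(device))
```

This way the device string is computed in one place instead of being edited by hand between CPU and GPU runs.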
I believe this is because device allocation is done at a low level: https://github.com/tensorflow/tensorflow/blob/cf4dbb45ffb4d6ea0dc9c2ecfb514e874092cd16/tensorflow/core/common_runtime/colocation_graph.cc
Maybe you can also try soft placement (allow_soft_placement=True), which lets TensorFlow fall back to a supported device when the one you requested is unavailable.
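A minimal sketch of enabling soft placement when creating the session (written with the compat.v1 namespace, which also exists in 1.14; plain tf.ConfigProto / tf.Session work the same way there):

```python
import tensorflow as tf

# allow_soft_placement tells TensorFlow to fall back to a supported
# device (e.g. the CPU) when the explicitly requested device is
# unavailable or the op has no kernel for it.
config = tf.compat.v1.ConfigProto(allow_soft_placement=True)
sess = tf.compat.v1.Session(config=config)
```

With this config you can leave a 'gpu:0' placement in the code and still run on a CPU-only machine.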
Hope it helps.