 

TensorFlow: get the default device name

Tags:

tensorflow

In TensorFlow 1.14 I'm trying to use tf.data.experimental.prefetch_to_device(device=...) to prefetch my data to the GPU. But I'm not always training on a GPU; I often train on a CPU (especially during development).

Is there a way to get the current default device in use? TensorFlow picks the CPU when I set CUDA_VISIBLE_DEVICES=-1 and otherwise picks the GPU, and that default usually works.

So far I can only find a way to list visible devices with sess.list_devices(), but there must be a way to query the current default device so I don't have to manually change it in prefetch_to_device every time, right?
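For reference, this is roughly what I have now (a minimal sketch; the dataset contents and buffer_size are just placeholders):

import tensorflow as tf

# Illustrative pipeline; the real data and batch size don't matter here.
dataset = tf.data.Dataset.from_tensor_slices(list(range(1000))).batch(32)

# The device string has to be edited by hand when switching between CPU and GPU:
dataset = dataset.apply(
    tf.data.experimental.prefetch_to_device("/gpu:0", buffer_size=2)
)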

asked Mar 04 '23 by David Parks

1 Answer

There is currently no API for querying the default device. The closest option,

device = 'gpu:0' if tf.test.is_gpu_available() else 'cpu'

is the check you have already mentioned.
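For example, the device chosen this way can be passed straight to prefetch_to_device (a sketch; the dataset and buffer_size are only placeholders):

import tensorflow as tf

# Choose the device once, based on whether a GPU is visible, then reuse the
# string wherever tf.data needs a device.
device = "/gpu:0" if tf.test.is_gpu_available() else "/cpu:0"

dataset = tf.data.Dataset.from_tensor_slices(list(range(1000))).batch(32)
dataset = dataset.apply(
    tf.data.experimental.prefetch_to_device(device, buffer_size=2)
)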

The reason I think so is that device allocation happens at a low level: https://github.com/tensorflow/tensorflow/blob/cf4dbb45ffb4d6ea0dc9c2ecfb514e874092cd16/tensorflow/core/common_runtime/colocation_graph.cc

Maybe you can also try soft placement, as sketched below.
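With allow_soft_placement enabled, ops pinned to a GPU that isn't available fall back to the CPU instead of raising an error (a sketch of the session config only):

import tensorflow as tf

# allow_soft_placement lets TensorFlow move an op pinned to a missing GPU
# back onto the CPU; log_device_placement prints where each op actually runs.
config = tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)
sess = tf.Session(config=config)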

Hope it helps.

answered Mar 07 '23 by eugen