I want to use TensorFlow 0.12 for GPU on my Ubuntu 14.04 machine.
But when I assign a device to a node, I get the following error:
InvalidArgumentError (see above for traceback): Cannot assign a device to
node 'my_model/RNN/zeros': Could not satisfy explicit device specification
'/device:GPU:0' because no devices matching that specification are registered
in this process; available devices: /job:localhost/replica:0/task:0/cpu:0
[[Node: my_model/RNN/zeros = Fill[T=DT_FLOAT, _device="/device:GPU:0"]
(my_model/RNN/pack, my_model/RNN/zeros/Const)]]
My TensorFlow installation seems to be set up correctly, since this simple program works:
import tensorflow as tf

# Creates a graph.
with tf.device('/gpu:0'):
    a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
    b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
    c = tf.matmul(a, b)

# Creates a session with log_device_placement set to True.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))

# Runs the op.
print(sess.run(c))
Which outputs:
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcudnn.so locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcurand.so locally
I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 0 with properties:
name: Tesla K40m
major: 3 minor: 5
memoryClockRate (GHz) 0.745
pciBusID 0000:08:00.0
Total memory: 11.17GiB
Free memory: 11.10GiB
I tensorflow/core/common_runtime/gpu/gpu_device.cc:906] DMA: 0
I tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 0:   Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: Tesla K40m, pci bus id: 0000:08:00.0)
Device mapping: /job:localhost/replica:0/task:0/gpu:0 -> device: 0, name: Tesla K40m, pci bus id: 0000:08:00.0
I tensorflow/core/common_runtime/direct_session.cc:255] Device mapping: /job:localhost/replica:0/task:0/gpu:0 -> device: 0, name: Tesla K40m, pci bus id: 0000:08:00.0
MatMul: (MatMul): /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:827] MatMul: (MatMul)/job:localhost/replica:0/task:0/gpu:0
b: (Const): /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:827] b: (Const)/job:localhost/replica:0/task:0/gpu:0
a: (Const): /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:827] a: (Const)/job:localhost/replica:0/task:0/gpu:0
[[ 22.  28.]
 [ 49.  64.]]
How can I assign a device to a node correctly?
Try creating the session with

sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True, log_device_placement=True))

Some operations have only a CPU implementation, so they can never satisfy an explicit '/device:GPU:0' placement. Setting allow_soft_placement=True lets TensorFlow fall back to the CPU for those operations instead of raising InvalidArgumentError.
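A minimal sketch of the fix, using the same matmul example as the question. Note this assumes a modern TensorFlow install, where the 0.12-era graph API lives under tf.compat.v1; on TF 0.12 itself you would call tf.Session and tf.ConfigProto directly and drop the compat lines. Because of soft placement, it runs even on a machine with no GPU at all:

```python
import tensorflow as tf

# On TF 0.12 these two lines are unnecessary: the graph API is the default.
tf1 = tf.compat.v1
tf1.disable_eager_execution()

# Explicitly request the GPU, as in the question.
with tf1.device('/gpu:0'):
    a = tf1.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
    b = tf1.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
    c = tf1.matmul(a, b)

# allow_soft_placement=True: ops without a registered GPU kernel (or on a
# machine without a visible GPU) are silently placed on the CPU instead of
# raising InvalidArgumentError.
config = tf1.ConfigProto(allow_soft_placement=True,
                         log_device_placement=True)
with tf1.Session(config=config) as sess:
    result = sess.run(c)
    print(result)  # [[22. 28.] [49. 64.]]
```

With log_device_placement=True the placement of each op is still printed to stderr, so you can verify which device each node actually ended up on.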