The TensorFlow documentation has the following example code for finding out the device placement of nodes, that is, the device on which a particular computation takes place:
import tensorflow as tf

# Creates a graph.
a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
c = tf.matmul(a, b)
# Creates a session with log_device_placement set to True.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
# Runs the op.
print(sess.run(c))
For me, the code does not print the device placements as it is supposed to. I'm using a Jupyter notebook running on Ubuntu. How can I fix this, or obtain the information some other way?
For Jupyter (and other) users, there is a recently added feature that makes it possible to read back the device placement from a Session.run() call and print it in your notebook.
import tensorflow as tf

# Creates a graph.
a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
c = tf.matmul(a, b)
# Creates a session.
sess = tf.Session()
# Runs the op, asking the runtime to return the per-device partitioned graphs.
options = tf.RunOptions(output_partition_graphs=True)
metadata = tf.RunMetadata()
c_val = sess.run(c, options=options, run_metadata=metadata)
print(metadata.partition_graphs)
The metadata.partition_graphs field contains the actual nodes of the graph that executed, partitioned by device. The partitions aren't explicitly labeled with the device they represent, but every NodeDef in each graph has its device field set.
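Since each NodeDef carries its own device field, you can recover a device-to-nodes mapping yourself. The sketch below uses a hypothetical helper, group_nodes_by_device, which takes (name, device) pairs of the kind you could extract with [(n.name, n.device) for g in metadata.partition_graphs for n in g.node]; the sample data here is made up for illustration.

```python
def group_nodes_by_device(node_devices):
    """Group node names by the device string from their NodeDef."""
    placement = {}
    for name, device in node_devices:
        placement.setdefault(device, []).append(name)
    return placement

# Illustrative (name, device) pairs of the kind found in partition_graphs.
nodes = [
    ("a", "/job:localhost/replica:0/task:0/device:CPU:0"),
    ("b", "/job:localhost/replica:0/task:0/device:CPU:0"),
    ("MatMul", "/job:localhost/replica:0/task:0/device:GPU:0"),
]
print(group_nodes_by_device(nodes))
```

This prints one entry per device, listing the nodes the runtime placed there.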
I can see the device mapping printed to the standard output of the Jupyter notebook process in the terminal; it just doesn't appear in the notebook itself.