We see this quite often in many of the TensorFlow tutorials:
sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True, log_device_placement=True))
What do allow_soft_placement and log_device_placement mean?
In short: allow_soft_placement lets TensorFlow fall back to a supported device (typically the CPU) when the one you requested is unavailable or has no kernel for the op; log_device_placement prints which device each operation is assigned to.
tf.ConfigProto() configures the session. You can also control which GPUs TensorFlow sees by setting the environment variable CUDA_VISIBLE_DEVICES before the process starts: -1 disables GPUs entirely, and the device ids map to the ones shown by the nvidia-smi command.
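As a small sketch of that environment-variable approach (the variable must be set before TensorFlow is imported; the value shown here is just an example):

```python
import os

# Hide all GPUs from CUDA applications; '-1' means "no visible devices".
# To expose specific GPUs instead, use e.g. '0' or '0,2' -- the ids match
# the numbering printed by the nvidia-smi command.
os.environ['CUDA_VISIBLE_DEVICES'] = '-1'

# The import must come AFTER the variable is set, or it has no effect.
import tensorflow as tf
```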
If you look at the API of ConfigProto, on line 278, you will see this:
// Whether soft placement is allowed. If allow_soft_placement is true,
// an op will be placed on CPU if
//   1. there's no GPU implementation for the OP
// or
//   2. no GPU devices are known or registered
// or
//   3. need to co-locate with reftype input(s) which are from CPU.
bool allow_soft_placement = 7;
What this really means is that if you do something like the following without allow_soft_placement=True, TensorFlow will throw an error:

with tf.device('/gpu:0'):
    # some op that doesn't have a GPU implementation
Right below it, you will see on line 281:
// Whether device placements should be logged.
bool log_device_placement = 8;
When log_device_placement=True, you will get verbose output along these lines:
2017-07-03 01:13:59.466748: I tensorflow/core/common_runtime/simple_placer.cc:841] Placeholder_1: (Placeholder)/job:localhost/replica:0/task:0/cpu:0
2017-07-03 01:13:59.466765: I tensorflow/core/common_runtime/simple_placer.cc:841] Placeholder: (Placeholder)/job:localhost/replica:0/task:0/cpu:0
2017-07-03 01:13:59.466783: I tensorflow/core/common_runtime/simple_placer.cc:841] Variable/initial_value: (Const)/job:localhost/replica:0/task:0/cpu:0
You can see which device each operation is mapped to. In this case they are all mapped to /cpu:0, but in a distributed setting there would be many more devices.
In addition to the comments in tensorflow/core/protobuf/config.proto (allow_soft_placement, log_device_placement), this is also explained in TF's Using GPUs tutorial.
To find out which devices your operations and tensors are assigned to, create the session with the log_device_placement configuration option set to True. This is helpful for debugging: for each node of your graph, you will see the device it was assigned to.
If you would like TensorFlow to automatically choose an existing, supported device to run the operations when the specified one doesn't exist, set allow_soft_placement to True in the configuration option when creating the session. This helps if you accidentally specify the wrong device, or a device that does not support a particular op, and it is useful when you write code that will run in environments you do not control: you can still provide useful device defaults, with a graceful fallback in case of failure.