I am running several TensorFlow inferences using sess.run()
in a loop, and some of them are too heavy for my GPU.
I get errors like:
2019-05-23 15:37:49.582272: E tensorflow/core/common_runtime/executor.cc:623]
Executor failed to create kernel. Resource exhausted: OOM when allocating tensor of shape [306] and type float
I would like to be able to catch these specific OutOfMemory errors but not other errors (which may be due to a wrong input format or a corrupted graph).
Obviously, a structure similar to:
try:
    sess.run(node_output, feed_dict={node_input: value_input})
except:
    do_outOfMemory_specific_stuff()
does not work, since other kinds of errors will also lead to a call to the do_outOfMemory_specific_stuff
function.
Any idea how to catch these OutOfMemory errors?
You should be able to catch it via:
...
except tf.errors.ResourceExhaustedError as e:
...
according to this documentation.
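For completeness, here is a minimal sketch of how that could look inside the loop from the question. The inputs iterable is a hypothetical placeholder, and do_outOfMemory_specific_stuff is the handler from the question, not a real TensorFlow API:

import tensorflow as tf

for value_input in inputs:  # hypothetical iterable of inputs
    try:
        result = sess.run(node_output, feed_dict={node_input: value_input})
    except tf.errors.ResourceExhaustedError:
        # Only OOM / resource-exhaustion errors land here, e.g. skip this
        # input or retry with a smaller batch.
        do_outOfMemory_specific_stuff()
    except tf.errors.OpError:
        # Any other TensorFlow error (wrong input format, corrupted graph,
        # ...) can be handled separately or simply re-raised.
        raise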