I cannot successfully run the optimize_for_inference module on a simple, saved TensorFlow graph (Python 2.7; package installed by pip install tensorflow-gpu==1.0.1).

Here's my Python script to generate and save a simple graph that adds 5 to my input placeholder operation x.
import tensorflow as tf

# make and save a simple graph
G = tf.Graph()
with G.as_default():
    x = tf.placeholder(dtype=tf.float32, shape=(), name="x")
    a = tf.Variable(5.0, name="a")
    y = tf.add(a, x, name="y")
    saver = tf.train.Saver()

with tf.Session(graph=G) as sess:
    sess.run(tf.global_variables_initializer())
    out = sess.run(fetches=[y], feed_dict={x: 1.0})
    print(out)
    saver.save(sess=sess, save_path="test_model")
I have a simple restore script that recreates the saved graph and restores the graph parameters. Both the save and restore scripts produce the same output.
import tensorflow as tf

# Restore simple graph and test model output
G = tf.Graph()
with tf.Session(graph=G) as sess:
    # recreate saved graph (structure)
    saver = tf.train.import_meta_graph('./test_model.meta')
    # restore net params
    saver.restore(sess, tf.train.latest_checkpoint('./'))
    x = G.get_operation_by_name("x").outputs[0]
    y = G.get_operation_by_name("y").outputs[0]
    out = sess.run(fetches=[y], feed_dict={x: 1.0})
    print(out)
But, while I don't expect much in terms of optimization for such a simple graph, when I try to optimize it for inference I get the following error message: the expected output node does not appear to be in the saved graph.
$ python -m tensorflow.python.tools.optimize_for_inference --input test_model.data-00000-of-00001 --output opt_model --input_names=x --output_names=y
Traceback (most recent call last):
  File "/usr/lib/python2.7/runpy.py", line 174, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/{path}/lib/python2.7/site-packages/tensorflow/python/tools/optimize_for_inference.py", line 141, in <module>
    app.run(main=main, argv=[sys.argv[0]] + unparsed)
  File "/{path}/local/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 44, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "/{path}/lib/python2.7/site-packages/tensorflow/python/tools/optimize_for_inference.py", line 90, in main
    FLAGS.output_names.split(","), FLAGS.placeholder_type_enum)
  File "/{path}/local/lib/python2.7/site-packages/tensorflow/python/tools/optimize_for_inference_lib.py", line 91, in optimize_for_inference
    placeholder_type_enum)
  File "/{path}/local/lib/python2.7/site-packages/tensorflow/python/tools/strip_unused_lib.py", line 71, in strip_unused
    output_node_names)
  File "/{path}/local/lib/python2.7/site-packages/tensorflow/python/framework/graph_util_impl.py", line 141, in extract_sub_graph
    assert d in name_to_node_map, "%s is not in graph" % d
AssertionError: y is not in graph
Further investigation led me to inspect the checkpoint of the saved graph, which shows only one tensor (a; there is no x and no y).
(tf-1.0.1) $ python -m tensorflow.python.tools.inspect_checkpoint --file_name ./test_model --all_tensors
tensor_name: a
5.0
Why aren't x and y in the checkpoint? Is it because they are operations and not tensors? If I need to give something other than the checkpoint files to the optimize_for_inference module, how do I build the graph so I can reference the input and output nodes?

Here is a detailed guide on how to optimize for inference:
The optimize_for_inference module takes a frozen binary GraphDef file as input and outputs an optimized GraphDef file that you can use for inference. To get the frozen binary GraphDef file, you need to use the freeze_graph module, which takes a GraphDef proto, a SaverDef proto, and a set of variables stored in a checkpoint file. The steps to achieve this are given below:
1. Save the GraphDef and the checkpoint:

import tensorflow as tf

# make and save a simple graph
G = tf.Graph()
with G.as_default():
    x = tf.placeholder(dtype=tf.float32, shape=(), name="x")
    a = tf.Variable(5.0, name="a")
    y = tf.add(a, x, name="y")
    saver = tf.train.Saver()

with tf.Session(graph=G) as sess:
    sess.run(tf.global_variables_initializer())
    out = sess.run(fetches=[y], feed_dict={x: 1.0})
    # Save GraphDef (text proto)
    tf.train.write_graph(sess.graph_def, '.', 'graph.pb')
    # Save checkpoint
    saver.save(sess=sess, save_path="test_model")
2. Freeze the graph, converting the checkpointed variables into constants:

python -m tensorflow.python.tools.freeze_graph --input_graph graph.pb --input_checkpoint test_model --output_graph graph_frozen.pb --output_node_names=y
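As an aside (a sketch not in the original answer, assuming the TF 1.x freeze_graph Python API; the restore_op_name and filename_tensor_name values below are the defaults produced by tf.train.Saver, so treat them as assumptions for other graphs), the same freeze step can be run from Python:

from tensorflow.python.tools import freeze_graph

# Freeze graph.pb using the variables stored in the test_model checkpoint
freeze_graph.freeze_graph(
    input_graph='graph.pb',               # text GraphDef from tf.train.write_graph
    input_saver='',                       # no separate SaverDef file
    input_binary=False,                   # graph.pb is a text proto
    input_checkpoint='test_model',
    output_node_names='y',                # comma-separated list of output nodes
    restore_op_name='save/restore_all',   # default Saver restore op (assumption)
    filename_tensor_name='save/Const:0',  # default Saver filename tensor (assumption)
    output_graph='graph_frozen.pb',
    clear_devices=True,
    initializer_nodes='')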
3. Optimize the frozen graph for inference:

python -m tensorflow.python.tools.optimize_for_inference --input graph_frozen.pb --output graph_optimized.pb --input_names=x --output_names=y
4. Use the optimized graph:

import tensorflow as tf

with tf.gfile.GFile('graph_optimized.pb', 'rb') as f:
    graph_def_optimized = tf.GraphDef()
    graph_def_optimized.ParseFromString(f.read())

G = tf.Graph()
with tf.Session(graph=G) as sess:
    y, = tf.import_graph_def(graph_def_optimized, return_elements=['y:0'])
    print('Operations in Optimized Graph:')
    print([op.name for op in G.get_operations()])
    x = G.get_tensor_by_name('import/x:0')
    out = sess.run(y, feed_dict={x: 1.0})
    print(out)

# Output
# Operations in Optimized Graph:
# ['import/x', 'import/a', 'import/y']
# 6.0
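A small aside (not in the original answer): tf.import_graph_def prefixes imported node names with import/ by default; passing name='' keeps the original names:

# Import without the default 'import/' name prefix
y, = tf.import_graph_def(graph_def_optimized, return_elements=['y:0'], name='')
x = G.get_tensor_by_name('x:0')  # original names now resolve directly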
If there are multiple output nodes, then specify output_node_names = 'boxes,scores,classes' (no spaces, since the list is split on commas) and import the graph by:

boxes, scores, classes = tf.import_graph_def(graph_def_optimized, return_elements=['boxes:0', 'scores:0', 'classes:0'])
The --input for the script is a GraphDef file, not the data part of the checkpoint. You need to freeze the model to a .pb file (or get the text prototxt for the graph) and then use the optimize_for_inference script. The script takes either a frozen binary GraphDef file (where the weight variables have been converted into constants by the freeze_graph script) or a text GraphDef proto file (where the weight variables are stored in a separate checkpoint file), and outputs a new GraphDef with the optimizations applied.
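For the text GraphDef case, a sketch (assuming the script's --frozen_graph flag, which selects between binary frozen input and text proto input):

# graph.pb here is the text proto written by tf.train.write_graph
python -m tensorflow.python.tools.optimize_for_inference \
    --input graph.pb --frozen_graph=False \
    --output graph_optimized.pb --input_names=x --output_names=y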