If I save my model in SavedModel format using the tensorflow.saved_model.save function, how can I retrieve afterwards which TensorFlow Ops are used in this model? Since the model can be restored, these operations must be stored in the graph, my guess is in the saved_model.pb file. If I load this protobuf (so not the entire model), the library part of the protobuf lists them, but this is not documented and is tagged as an experimental feature for now. Models created in TensorFlow 1.x won't have this part at all.
So what is a fast and reliable way to retrieve a list of the used Operations (like MatchingFiles or WriteFile) from a model in SavedModel format?
Right now I can freeze the entire thing, like tensorflowjs-converter does, since it also checks for supported Operations. However, this currently does not work when an LSTM is in the model, see here. Is there a better way to do this, as the Ops are definitely in there?
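For reference, the freezing approach looks roughly like the sketch below in TensorFlow 2.x. The 'path/to/saved_model' directory and the 'serving_default' signature key are placeholders, and convert_variables_to_constants_v2 lives in a non-public TensorFlow module:

import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2

# Placeholder path to a SavedModel directory
loaded = tf.saved_model.load('path/to/saved_model')
# Assumes a 'serving_default' signature was exported
concrete_func = loaded.signatures['serving_default']
# Inline all variables as constants to obtain a single frozen graph
frozen_func = convert_variables_to_constants_v2(concrete_func)
# List the op types found in the frozen graph
print(sorted({node.op for node in frozen_func.graph.as_graph_def().node}))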
An example model:
import tensorflow as tf

class FileReader(tf.Module):
    @tf.function(input_signature=[tf.TensorSpec(name='filename', shape=[None], dtype=tf.string)])
    def read_disk(self, file_name):
        input_scalar = tf.reshape(file_name, [])
        output = tf.io.read_file(input_scalar)
        return tf.stack([output], name='content')

file_reader = FileReader()
tf.saved_model.save(file_reader, 'file_reader')
The expected output is a list of all Ops, in this case containing at least ReadFile, as described here.
The saved_model.pb file stores the actual TensorFlow program, or model, and a set of named signatures, each identifying a function that accepts tensor inputs and produces tensor outputs.
Call tf.keras.Model.save to save a model's architecture, weights, and training configuration in a single file/folder. This allows you to export a model so it can be used without access to the original Python code.
Restoring the model is done using tf.saved_model.loader, which restores the saved variables, signatures, and assets in the scope of a session.
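As a minimal sketch of that restore step in TensorFlow 2.x, assuming the example model above was saved to 'file_reader' and that a 'serving_default' signature was exported (both assumptions, not taken from the question):

import tensorflow as tf

# Restore the model exported by the example in the question
loaded = tf.saved_model.load('file_reader')
# 'serving_default' is assumed to be the exported signature key
read_fn = loaded.signatures['serving_default']
# 'some_file.txt' is a placeholder; it must point to an existing file
print(read_fn(filename=tf.constant(['some_file.txt'])))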
If saved_model.pb is a SavedModel protobuf message, then you can get the operations directly from there. Let's say we create a model as follows:
import tensorflow as tf

class FileReader(tf.Module):
    @tf.function(input_signature=[tf.TensorSpec(name='filename', shape=[None], dtype=tf.string)])
    def read_disk(self, file_name):
        input_scalar = tf.reshape(file_name, [])
        output = tf.io.read_file(input_scalar)
        return tf.stack([output], name='content')

file_reader = FileReader()
tf.saved_model.save(file_reader, 'tmp')
We can now find the operations used by that model like this:
from tensorflow.core.protobuf.saved_model_pb2 import SavedModel

saved_model = SavedModel()
with open('tmp/saved_model.pb', 'rb') as f:
    saved_model.ParseFromString(f.read())
model_op_names = set()
# Iterate over every metagraph in case there is more than one
for meta_graph in saved_model.meta_graphs:
    # Add operations in the graph definition
    model_op_names.update(node.op for node in meta_graph.graph_def.node)
    # Go through the functions in the graph definition
    for func in meta_graph.graph_def.library.function:
        # Add operations in each function
        model_op_names.update(node.op for node in func.node_def)
# Convert to list, sorted if you want
model_op_names = sorted(model_op_names)
print(*model_op_names, sep='\n')
# Const
# Identity
# MergeV2Checkpoints
# NoOp
# Pack
# PartitionedCall
# Placeholder
# ReadFile
# Reshape
# RestoreV2
# SaveV2
# ShardedFilename
# StatefulPartitionedCall
# StringJoin
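If the goal is the kind of compatibility check that tensorflowjs-converter performs, the resulting list can be compared against an allowlist. SUPPORTED_OPS below is a made-up example set, not the list any real converter uses:

# Hypothetical allowlist; a real converter ships its own list of supported ops
SUPPORTED_OPS = {'Const', 'Identity', 'NoOp', 'Pack', 'Placeholder',
                 'ReadFile', 'Reshape', 'StringJoin'}
unsupported = [op for op in model_op_names if op not in SUPPORTED_OPS]
if unsupported:
    print('Unsupported ops:', ', '.join(unsupported))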