I would like to change the input and output signatures of the saved model. I used tf.Module objects to build the operations of the main model:
class Generator(tf.Module):
    def __init__(self, ...):
        super(Generator, self).__init__(name=name)
        ...
        with self.name_scope:
            ...

    @tf.Module.with_name_scope
    def __call__(self, input):
        ...

    @tf.function
    def serve_function(self, input):
        out = self.__call__(input)
        return out

call = model.Generator.serve_function.get_concrete_function(
    tf.TensorSpec([None, 256, 256, 3], tf.float32))
tf.saved_model.save(model.Generator, os.path.join(train_log_dir, 'frozen'))
Then, when I load the model, the signature key is 'serving_default' and the output name is 'output_0'. How can I change these names?
TensorFlow Lite supports carrying a TensorFlow model's input/output specifications over to the converted TensorFlow Lite model. These input/output specifications are called "signatures". Signatures can be specified when building a SavedModel or when creating concrete functions.
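For illustration, here is a minimal sketch of converting a SavedModel and invoking it by signature through the TFLite interpreter; export_path and the signature, input, and output names are assumptions taken from the example further below, not part of the original question:

import numpy as np
import tensorflow as tf

# Convert the SavedModel; its signatures are carried over to the TFLite model.
converter = tf.lite.TFLiteConverter.from_saved_model(export_path)
tflite_model = converter.convert()

# Look the signature up by its key and call it with named inputs; the result
# is a dictionary keyed by the output names.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
runner = interpreter.get_signature_runner('serving_default')
result = runner(my_prediction_inputs=np.ones((1, 4), np.float32))
print(result['my_prediction_outputs'])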
tf.saved_model.save exports a tf.Module (and subclasses) object to SavedModel format.
A SavedModel contains a complete TensorFlow program, including trained parameters (i.e., tf.Variables) and computation. It does not require the original model-building code to run, which makes it useful for sharing or deploying with TFLite, TensorFlow.js, TensorFlow Serving, or TensorFlow Hub.
In this context, a model signature describes the named inputs a model accepts and the named outputs it produces; callers use these names when sending data to the model for scoring.
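Putting these pieces together: the signature key itself can be chosen by passing the concrete function to tf.saved_model.save through its signatures argument. A minimal sketch against the question's code, assuming model.Generator and train_log_dir are defined as above (the key 'serving_default' is TensorFlow's default; any other string works as well):

call = model.Generator.serve_function.get_concrete_function(
    tf.TensorSpec([None, 256, 256, 3], tf.float32))

# Without signatures=, tf.saved_model.save falls back to the default key
# 'serving_default'; the dictionary key below sets the signature name.
tf.saved_model.save(
    model.Generator,
    os.path.join(train_log_dir, 'frozen'),
    signatures={'serving_default': call})

The output names, however, still default to output_0 unless the traced function returns a dictionary, which is exactly what the answer below addresses.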
I figured out a way to define the output signature without using tf.Module: define a tf.function that returns a dictionary of outputs, where the keys of the dictionary become the output names.
import tensorflow as tf

# Create the model
model = ...

# Train the model
model.fit(...)

# Define where to save the model
export_path = "..."

@tf.function()
def my_predict(my_prediction_inputs):
    inputs = {
        'my_serving_input': my_prediction_inputs,
    }
    prediction = model(inputs)
    return {"my_prediction_outputs": prediction}

my_signatures = my_predict.get_concrete_function(
    my_prediction_inputs=tf.TensorSpec([None, None], dtype=tf.dtypes.float32,
                                       name="my_prediction_inputs")
)

# Save the model.
tf.saved_model.save(
    model,
    export_dir=export_path,
    signatures=my_signatures
)
This produces the following signature (as shown by saved_model_cli):
signature_def['serving_default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['my_prediction_inputs'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, -1)
        name: serving_default_my_prediction_inputs:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['my_prediction_outputs'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 1)
        name: StatefulPartitionedCall:0
  Method name is: tensorflow/serving/predict
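To verify the names are preserved, here is a short sketch (assuming the same export_path as above) of loading the SavedModel back and calling the signature:

import tensorflow as tf

loaded = tf.saved_model.load(export_path)
infer = loaded.signatures['serving_default']

# Inputs are passed by the name given in the TensorSpec; the result is a
# dictionary keyed by the output names returned from my_predict.
result = infer(my_prediction_inputs=tf.ones([2, 3], tf.float32))
print(result['my_prediction_outputs'])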