 

TensorFlow: How to predict from a SavedModel?

I have exported a SavedModel and now I wish to load it back in and make a prediction. It was trained with the following features and labels:

F1 : FLOAT32
F2 : FLOAT32
F3 : FLOAT32
L1 : FLOAT32

So say I want to feed in the values 20.9, 1.8, 0.9 and get a single FLOAT32 prediction. How do I accomplish this? I have managed to load the model successfully, but I am not sure how to access it to make the prediction call.

with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(
        sess,
        [tf.saved_model.tag_constants.SERVING],
        "/job/export/Servo/1503723455"
    )

    # How can I predict from here?
    # I want to do something like prediction = model.predict([20.9, 1.8, 0.9])

This question is not a duplicate of the question posted here. It focuses on a minimal example of performing inference on a SavedModel of any model class (not just tf.estimator) and on the syntax for specifying input and output node names.

asked Aug 26 '17 by jshapy8


What is a TensorFlow SavedModel?

A SavedModel contains a complete TensorFlow program, including trained parameters (i.e., tf.Variables) and computation. It does not require the original model-building code to run, which makes it useful for sharing or deploying with TFLite, TensorFlow.js, TensorFlow Serving, or TensorFlow Hub.
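
If you are unsure which input and output names a SavedModel exposes, the saved_model_cli tool that ships with TensorFlow can list its signatures. For example, for the export directory from the question:

    saved_model_cli show --dir /job/export/Servo/1503723455 --all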


2 Answers

Assuming you want predictions in Python, SavedModelPredictor is probably the easiest way to load a SavedModel and get predictions. Suppose you save your model like so:

import tensorflow as tf

# Build the graph
f1 = tf.placeholder(shape=[], dtype=tf.float32)
f2 = tf.placeholder(shape=[], dtype=tf.float32)
f3 = tf.placeholder(shape=[], dtype=tf.float32)
l1 = tf.placeholder(shape=[], dtype=tf.float32)
output = build_graph(f1, f2, f3, l1)  # your model-construction function

# Save the model
inputs = {'F1': f1, 'F2': f2, 'F3': f3, 'L1': l1}
outputs = {'output': output}
with tf.Session() as sess:
    # ... train the model here ...
    tf.saved_model.simple_save(sess, export_dir, inputs, outputs)
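
simple_save writes the graph plus a 'serving_default' signature built from the inputs and outputs dicts, which is the signature the predictor below looks up by default.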

(The inputs can be any shape and don't even have to be placeholders or root nodes in the graph.)

Then, in the Python program that will use the SavedModel, we can get predictions like so:

from tensorflow.contrib import predictor

predict_fn = predictor.from_saved_model(export_dir)
predictions = predict_fn(
    {"F1": 1.0, "F2": 2.0, "F3": 3.0, "L1": 4.0})
print(predictions)
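
predict_fn returns a dict of NumPy values keyed by the names in the outputs dict used at export time, so the result here should look like {'output': ...}.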

This answer shows how to get predictions in Java, C++, and Python (despite the fact that the question is focused on Estimators, the answer actually applies independently of how the SavedModel is created).
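
If you prefer the lower-level loader API from the question, you can also read the input and output tensor names out of the signature def and feed sess.run directly. A sketch, assuming the model was exported with the default 'serving_default' signature (as simple_save does) and the input/output names from above:

import tensorflow as tf

with tf.Session(graph=tf.Graph()) as sess:
    meta_graph_def = tf.saved_model.loader.load(
        sess, [tf.saved_model.tag_constants.SERVING], export_dir)
    sig = meta_graph_def.signature_def['serving_default']

    # Map the logical names ('F1', ...) to the actual tensor names in the graph.
    # 'L1' is part of the exported signature above, so feed a dummy value.
    feed_dict = {sig.inputs[name].name: value
                 for name, value in
                 {'F1': 20.9, 'F2': 1.8, 'F3': 0.9, 'L1': 0.0}.items()}
    prediction = sess.run(sig.outputs['output'].name, feed_dict=feed_dict)
    print(prediction)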

answered Sep 19 '22 by rhaertel80


For anyone who needs a working example of saving a trained canned model and serving it without TensorFlow Serving, I have documented it here: https://github.com/tettusud/tensorflow-examples/tree/master/estimators

  1. You can create a predictor with tf.contrib.predictor.from_saved_model(exported_model_path)
  2. Prepare the input:

    example = tf.train.Example(
        features=tf.train.Features(
            feature={
                'x': tf.train.Feature(
                    float_list=tf.train.FloatList(value=[6.4, 3.2, 4.5, 1.5])
                )
            }
        )
    )
    

Here 'x' is the feature name that was given in the serving_input_receiver_fn at the time of exporting. For example:

feature_spec = {'x': tf.FixedLenFeature([4], tf.float32)}

def serving_input_receiver_fn():
    # Placeholder for serialized tf.train.Example protos
    serialized_tf_example = tf.placeholder(dtype=tf.string,
                                           shape=[None],
                                           name='input_tensors')
    # 'inputs' is the key you feed when calling the predictor
    receiver_tensors = {'inputs': serialized_tf_example}
    features = tf.parse_example(serialized_tf_example, feature_spec)
    return tf.estimator.export.ServingInputReceiver(features, receiver_tensors)
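
  3. Call the predictor with the serialized example. A minimal sketch, assuming predict_fn from step 1 and the example proto from step 2 (the 'inputs' key matches receiver_tensors above):

    serialized = example.SerializeToString()
    predictions = predict_fn({'inputs': [serialized]})
    print(predictions)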
answered Sep 21 '22 by sudharsan tk