 

How to import a saved TensorFlow model trained using tf.estimator and predict on input data


I have saved the model using tf.estimator's export_savedmodel method as follows:

export_dir="exportModel/"  feature_spec = tf.feature_column.make_parse_example_spec(feature_columns)  input_receiver_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec)  classifier.export_savedmodel(export_dir, input_receiver_fn, as_text=False, checkpoint_path="Model/model.ckpt-400")  

How can I import this saved model and use it for predictions?

asked Sep 07 '17 by nayan



1 Answer

I tried to search for a good base example, but it appears the documentation and samples are a bit scattered for this topic. So let's start with a base example: the tf.estimator quickstart.
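
For context, the snippets below assume a classifier and feature_columns along the lines of that quickstart. Here is a minimal sketch, assuming the Iris setup (a single 4-dimensional float feature "x" and 3 classes); the hyperparameters and model_dir are illustrative:

import tensorflow as tf

# A single numeric feature column named "x" holding all four Iris measurements.
feature_columns = [tf.feature_column.numeric_column("x", shape=[4])]

# A canned DNNClassifier; the workaround in the appendix wraps exactly this
# kind of estimator.
classifier = tf.estimator.DNNClassifier(
    feature_columns=feature_columns,
    hidden_units=[10, 20, 10],
    n_classes=3,
    model_dir="/tmp/iris_model")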

That particular example doesn't actually export a model, so let's do that (the export is not needed for use case 1):

def serving_input_receiver_fn():
  """Build the serving inputs."""
  # The outer dimension (None) allows us to batch up inputs for
  # efficiency. However, it also means that if we want a prediction
  # for a single instance, we'll need to wrap it in an outer list.
  inputs = {"x": tf.placeholder(shape=[None, 4], dtype=tf.float32)}
  return tf.estimator.export.ServingInputReceiver(inputs, inputs)

export_dir = classifier.export_savedmodel(
    export_dir_base="/path/to/model",
    serving_input_receiver_fn=serving_input_receiver_fn)

Huge asterisk on this code: there appears to be a bug in TensorFlow 1.3 that doesn't allow you to do the above export on a "canned" estimator (such as DNNClassifier). For a workaround, see the "Appendix: Working around Exports from Canned Models in TF 1.3" section at the end of this answer.

The code below references export_dir (return value from the export step) to emphasize that it is not "/path/to/model", but rather, a subdirectory of that directory whose name is a timestamp.
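
If you only kept export_dir_base rather than the return value, you can recover the most recent export by picking the largest timestamped subdirectory. A small sketch; the helper name here is made up for illustration:

import os

def latest_export_dir(export_dir_base="/path/to/model"):
  # export_savedmodel creates one subdirectory per export, named with a
  # UNIX timestamp, so the numerically largest name is the newest export.
  timestamped = [d for d in os.listdir(export_dir_base) if d.isdigit()]
  return os.path.join(export_dir_base, max(timestamped, key=int))

export_dir = latest_export_dir()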

Use Case 1: Perform prediction in the same process as training

This is a scikit-learn type of experience, and it is already exemplified by the sample. For completeness' sake, you simply call predict on the trained model:

classifier.train(input_fn=train_input_fn, steps=2000)
# [...snip...]
predictions = list(classifier.predict(input_fn=predict_input_fn))
predicted_classes = [p["classes"] for p in predictions]
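
The predict_input_fn above isn't defined in the snippet; here is a minimal sketch using tf.estimator.inputs.numpy_input_fn (the same helper the quickstart uses), with two made-up Iris-like samples:

import numpy as np

# Two unlabeled samples to classify. num_epochs=1 and shuffle=False so the
# input function yields each sample exactly once, in order.
predict_input_fn = tf.estimator.inputs.numpy_input_fn(
    x={"x": np.array([[6.4, 3.2, 4.5, 1.5],
                      [5.8, 3.1, 5.0, 1.7]], dtype=np.float32)},
    num_epochs=1,
    shuffle=False)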

Use Case 2: Load a SavedModel into Python/Java/C++ and perform predictions

Python Client

Perhaps the easiest thing to use if you want to do prediction in Python is SavedModelPredictor. In the Python program that will use the SavedModel, we need code like this:

from tensorflow.contrib import predictor

predict_fn = predictor.from_saved_model(export_dir)
predictions = predict_fn(
    {"x": [[6.4, 3.2, 4.5, 1.5],
           [5.8, 3.1, 5.0, 1.7]]})
print(predictions['scores'])

Java Client

package dummy;

import java.nio.FloatBuffer;
import java.util.Arrays;
import java.util.List;

import org.tensorflow.SavedModelBundle;
import org.tensorflow.Session;
import org.tensorflow.Tensor;

public class Client {

  public static void main(String[] args) {
    Session session = SavedModelBundle.load(args[0], "serve").session();

    Tensor x =
        Tensor.create(
            new long[] {2, 4},
            FloatBuffer.wrap(
                new float[] {
                  6.4f, 3.2f, 4.5f, 1.5f,
                  5.8f, 3.1f, 5.0f, 1.7f
                }));

    // Doesn't look like Java has a good way to convert the
    // input/output name ("x", "scores") to their underlying tensor,
    // so we hard code them ("Placeholder:0", ...).
    // You can inspect them on the command-line with saved_model_cli:
    //
    // $ saved_model_cli show --dir $EXPORT_DIR --tag_set serve --signature_def serving_default
    final String xName = "Placeholder:0";
    final String scoresName = "dnn/head/predictions/probabilities:0";

    List<Tensor> outputs = session.runner()
        .feed(xName, x)
        .fetch(scoresName)
        .run();

    // Outer dimension is batch size; inner dimension is number of classes.
    float[][] scores = new float[2][3];
    outputs.get(0).copyTo(scores);
    System.out.println(Arrays.deepToString(scores));
  }
}

C++ Client

You'll likely want to use tensorflow::LoadSavedModel with Session.

#include <iostream>
#include <string>
#include <unordered_set>
#include <utility>
#include <vector>

#include "tensorflow/cc/saved_model/loader.h"
#include "tensorflow/core/framework/tensor.h"
#include "tensorflow/core/public/session.h"

namespace tf = tensorflow;

int main(int argc, char** argv) {
  const std::string export_dir = argv[1];

  tf::SavedModelBundle bundle;
  tf::Status load_status = tf::LoadSavedModel(
      tf::SessionOptions(), tf::RunOptions(), export_dir, {"serve"}, &bundle);
  if (!load_status.ok()) {
    std::cout << "Error loading model: " << load_status << std::endl;
    return -1;
  }

  // We should get the signature out of MetaGraphDef, but that's a bit
  // involved. We'll take a shortcut like we did in the Java example.
  const std::string x_name = "Placeholder:0";
  const std::string scores_name = "dnn/head/predictions/probabilities:0";

  auto x = tf::Tensor(tf::DT_FLOAT, tf::TensorShape({2, 4}));
  auto matrix = x.matrix<float>();
  // First example.
  matrix(0, 0) = 6.4;
  matrix(0, 1) = 3.2;
  matrix(0, 2) = 4.5;
  matrix(0, 3) = 1.5;
  // Second example.
  matrix(1, 0) = 5.8;
  matrix(1, 1) = 3.1;
  matrix(1, 2) = 5.0;
  matrix(1, 3) = 1.7;

  std::vector<std::pair<std::string, tf::Tensor>> inputs = {{x_name, x}};
  std::vector<tf::Tensor> outputs;

  tf::Status run_status =
      bundle.session->Run(inputs, {scores_name}, {}, &outputs);
  if (!run_status.ok()) {
    std::cout << "Error running session: " << run_status << std::endl;
    return -1;
  }

  for (const auto& tensor : outputs) {
    std::cout << tensor.matrix<float>() << std::endl;
  }
  return 0;
}

Use Case 3: Serve a model using TensorFlow Serving

Exporting a Classification model in a manner amenable to TensorFlow Serving requires that the input be a tf.Example object. Here's how we might export a model for TensorFlow Serving:

def serving_input_receiver_fn():
  """Build the serving inputs."""
  # The outer dimension (None) allows us to batch up inputs for
  # efficiency. However, it also means that if we want a prediction
  # for a single instance, we'll need to wrap it in an outer list.
  example_bytestring = tf.placeholder(
      shape=[None],
      dtype=tf.string,
  )
  features = tf.parse_example(
      example_bytestring,
      tf.feature_column.make_parse_example_spec(feature_columns)
  )
  return tf.estimator.export.ServingInputReceiver(
      features, {'examples': example_bytestring})

export_dir = classifier.export_savedmodel(
    export_dir_base="/path/to/model",
    serving_input_receiver_fn=serving_input_receiver_fn)

The reader is referred to TensorFlow Serving's documentation for instructions on how to set up TensorFlow Serving, so I'll only provide the client code here:

# Omitting a bunch of connection/initialization code...
# But at some point we end up with a stub whose lifecycle
# is generally longer than that of a single request.
stub = create_stub(...)

# The actual values for prediction. We have two examples in this
# case, each consisting of a single, multi-dimensional feature `x`.
# This data here is the equivalent of the map passed to the
# `predict_fn` in use case #2.
examples = [
  tf.train.Example(
    features=tf.train.Features(
      feature={"x": tf.train.Feature(
        float_list=tf.train.FloatList(value=[6.4, 3.2, 4.5, 1.5]))})),
  tf.train.Example(
    features=tf.train.Features(
      feature={"x": tf.train.Feature(
        float_list=tf.train.FloatList(value=[5.8, 3.1, 5.0, 1.7]))})),
]

# Build the RPC request. The serving input receiver expects serialized
# tf.Example protos, so each Example is serialized to a bytestring.
predict_request = predict_pb2.PredictRequest()
predict_request.model_spec.name = "default"
predict_request.inputs["examples"].CopyFrom(
    tensor_util.make_tensor_proto(
        [example.SerializeToString() for example in examples],
        dtype=tf.string))

# Perform the actual prediction.
stub.Predict(predict_request, PREDICT_DEADLINE_SECS)

Note that the key, examples, that is referenced in the predict_request.inputs needs to match the key used in the serving_input_receiver_fn at export time (cf. the constructor to ServingInputReceiver in that code).
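
If you want to double-check which input and output keys an export actually uses, one option (an illustrative check, not part of the serving client itself) is to load the SavedModel with the contrib predictor and inspect its feed and fetch tensors:

from tensorflow.contrib import predictor

# export_dir is the timestamped directory produced by the export above.
predict_fn = predictor.from_saved_model(export_dir)
print(predict_fn.feed_tensors)   # should show the 'examples' input key
print(predict_fn.fetch_tensors)  # output keys such as 'scores'/'classes'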

Appendix: Working around Exports from Canned Models in TF 1.3

There appears to be a bug in TensorFlow 1.3 in which canned models do not export properly for use case 2 (the problem does not exist for "custom" estimators). Here is a workaround that wraps a DNNClassifier to make things work, specifically for the Iris example:

class Wrapper(tf.estimator.Estimator):
  def __init__(self, **kwargs):
    dnn = tf.estimator.DNNClassifier(**kwargs)

    def model_fn(mode, features, labels):
      spec = dnn._call_model_fn(features, labels, mode)
      export_outputs = None
      if spec.export_outputs:
        export_outputs = {
            "serving_default": tf.estimator.export.PredictOutput(
                {"scores": spec.export_outputs["serving_default"].scores,
                 "classes": spec.export_outputs["serving_default"].classes})}

      # Replace export_outputs in the returned EstimatorSpec.
      copy = list(spec)
      copy[4] = export_outputs
      return tf.estimator.EstimatorSpec(mode, *copy)

    super(Wrapper, self).__init__(model_fn, kwargs["model_dir"], dnn.config)

# Build a 3-layer DNN with 10, 20, 10 units respectively.
classifier = Wrapper(feature_columns=feature_columns,
                     hidden_units=[10, 20, 10],
                     n_classes=3,
                     model_dir="/tmp/iris_model")
answered by rhaertel80