
Deploy TFX with existing frozen_inference_graph.pb and label_map.pbtxt

I have trained an object detection model with a Faster R-CNN network and have the frozen_inference_graph.pb and label_map.pbtxt files from training. I want to deploy it as a REST API server so that it can be called from systems that do not have TensorFlow. That's when I came across TFX.

How can I use the TFX tensorflow-model-server to load this model and host a REST API, so that I can send images for prediction as POST requests?

This is what I found as a reference: https://www.tensorflow.org/tfx/tutorials/serving/rest_simple, but the models there are in a different format from what I currently have. Is there any mechanism by which I can reuse the model I currently have, or will I have to retrain using Keras and deploy as shown in the reference?
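
From the tutorial, the prediction call itself appears to be a plain HTTP POST against the serving endpoint. A minimal sketch of what I expect the client side to look like, assuming a server on port 8501 and a placeholder model name my_model (neither is from my actual setup):

import json
import numpy as np
import requests

# Hypothetical endpoint: tensorflow-model-server started with
# --rest_api_port=8501 and --model_name=my_model (placeholder name).
url = "http://localhost:8501/v1/models/my_model:predict"

# The TF Serving REST API wraps inputs in an "instances" list; an object
# detection graph typically takes HxWx3 uint8 images.
image = np.zeros((300, 300, 3), dtype=np.uint8)  # stand-in for a real image
payload = {"instances": [image.tolist()]}

response = requests.post(url, data=json.dumps(payload))
print(response.json())  # {"predictions": [...]} on success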

Asked Oct 16 '22 by Sreekiran A R

1 Answer

To reuse your model with TFX, the frozen graph needs to have a serving signature specified. We tried converting your model into the SavedModel format using the code below, which successfully created a saved_model.pb file with the tag-set "serve".

import tensorflow as tf
from tensorflow.python.saved_model import signature_constants
from tensorflow.python.saved_model import tag_constants

# TF1-style graph/session APIs; under TensorFlow 2.x, use the
# tf.compat.v1.* equivalents of the tf.* calls below.
export_dir = './saved'
graph_pb = 'frozen_inference_graph.pb'

builder = tf.saved_model.builder.SavedModelBuilder(export_dir)

# Read the frozen graph definition from disk.
with tf.gfile.GFile(graph_pb, "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

sigs = {}

with tf.Session(graph=tf.Graph()) as sess:
    # name="" is important to ensure we don't get spurious prefixing
    tf.import_graph_def(graph_def, name="")
    g = tf.get_default_graph()

    # Standard tensor names for an Object Detection API frozen graph.
    inp = g.get_tensor_by_name("image_tensor:0")
    outputs = {}
    outputs["detection_boxes"] = g.get_tensor_by_name('detection_boxes:0')
    outputs["detection_scores"] = g.get_tensor_by_name('detection_scores:0')
    outputs["detection_classes"] = g.get_tensor_by_name('detection_classes:0')
    outputs["num_detections"] = g.get_tensor_by_name('num_detections:0')

    # Expose each detection output under its own name; the output tensors
    # have different shapes, so they cannot be concatenated into one tensor.
    sigs[signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY] = \
        tf.saved_model.signature_def_utils.predict_signature_def(
            {"in": inp}, outputs)

    sigs["predict_images"] = \
        tf.saved_model.signature_def_utils.predict_signature_def(
            {"in": inp}, outputs)

    builder.add_meta_graph_and_variables(sess,
                                         [tag_constants.SERVING],
                                         signature_def_map=sigs)

builder.save()
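
Before serving, you can sanity-check the export with saved_model_cli and then point tensorflow_model_server at it. The commands below are a sketch assuming the export directory above; my_model is a placeholder name, and TF Serving expects a numeric version subdirectory:

# Inspect the exported tag-sets and signatures:
saved_model_cli show --dir ./saved --all

# TF Serving loads models from numbered version subdirectories,
# e.g. ./saved_export/1/saved_model.pb:
mkdir -p ./saved_export/1 && cp -r ./saved/* ./saved_export/1/
tensorflow_model_server --rest_api_port=8501 \
    --model_name=my_model --model_base_path="$(pwd)/saved_export"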

We tested the converted model by running a prediction on the sample image you provided. The result doesn't show any detections, which probably means the conversion method doesn't work as expected.

As for your question:

"Is there any mechanism in which I can reuse the model I currently have or will I have to retrain using Keras and deploy as shown in the reference?"

Given this result, the answer to your question is that it is better to retrain your model using Keras, because converting or reusing your frozen graph isn't going to be the solution. A frozen graph does not save the variables required for serving, and its format is not suitable for a production environment. Following the official documentation is the best way forward, as you can be assured that it will work.
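
For reference, the export step in that tutorial boils down to saving the retrained Keras model in the SavedModel format under a numbered version directory, roughly like this (a sketch; the dummy model stands in for your retrained detector):

import os
import tensorflow as tf

# Dummy stand-in for your retrained tf.keras model.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

export_base = "./my_model"
version = 1  # TF Serving picks up numeric version subdirectories
export_path = os.path.join(export_base, str(version))

# A SavedModel export writes the serving signature and the variables
# that a frozen graph does not carry.
tf.keras.models.save_model(model, export_path, save_format="tf")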

Answered Nov 15 '22 by TF_Support