 

How do I export an eager execution model?

Having completed my model, I now wish to export and deploy it, following this tutorial on TensorFlow's website. However, there is no indication of how to do this in eager execution, where I am unable to provide a session or graph to builder.add_meta_graph_and_variables().

Is this a case where my code needs to be eager and graph compatible, or where I need to save my model, import it to a session, and export it from there?

Asked Nov 09 '18 by Jordan Patterson

People also ask

What is eager execution mode?

Eager execution is a powerful execution environment that evaluates operations immediately. It does not build graphs, and the operations return actual values instead of computational graphs to run later.
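For example (a minimal sketch, assuming TensorFlow 2.x, where eager execution is on by default), an operation returns a concrete value as soon as it is called:

import tensorflow as tf

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.matmul(a, a)
print(b)          # the concrete result is available immediately, no Session needed
print(b.numpy())  # can be converted straight to a NumPy array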

How do I enable eager execution in TensorFlow?

In TensorFlow 1.x you enable it by calling tf.enable_eager_execution() at program startup; in TensorFlow 2.x it is enabled by default. Eager execution cannot be enabled after TensorFlow APIs have been used to create or execute graphs, so it is typically recommended to invoke this function at program startup and not in a library (as most libraries should be usable both with and without eager execution).
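A minimal sketch for TensorFlow 1.x (no call is needed in 2.x, where eager execution is the default):

import tensorflow as tf

tf.enable_eager_execution()    # call once, at program startup,
                               # before any graph-building API is used
print(tf.executing_eagerly())  # True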

What is lazy execution in context of TensorFlow?

By design, TensorFlow 1.x is based on lazy (graph) execution, though we can force eager execution. That means it does not process data as soon as operations are defined: it only gathers the operations we feed into it into a graph, and the computation runs when we finally ask for a result, typically inside a session.
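As an illustration of lazy (graph) execution in TensorFlow 1.x, the multiplication below only defines a graph node; nothing is computed until the session runs it:

import tensorflow as tf

a = tf.constant(2.0)
b = tf.constant(3.0)
c = a * b                  # builds a graph node; no computation happens here
with tf.Session() as sess:
    print(sess.run(c))     # 6.0 -- the value is computed only now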


1 Answer

TF 2.0 Alpha supports exporting/saving a model built with eager execution (eager execution is the default in 2.0). A SavedModel contains a complete TensorFlow program, including weights and computation.

Sample code is shown below:

!pip install -q tensorflow==2.0.0-alpha0
import tensorflow as tf

pretrained_model = tf.keras.applications.MobileNet()
tf.saved_model.save(pretrained_model, "/tmp/mobilenet/1/")

#Loading the saved model
loaded = tf.saved_model.load("/tmp/mobilenet/1/")
infer = loaded.signatures["serving_default"]
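The loaded signature can also be called directly in Python. Below is a minimal sketch; the random input batch is a hypothetical placeholder with MobileNet's expected shape (1, 224, 224, 3):

import numpy

x = numpy.random.rand(1, 224, 224, 3).astype("float32")  # placeholder input batch
labeling = infer(tf.constant(x))   # returns a dict of output tensors
print(list(labeling.keys()))       # name(s) of the model's output tensor(s)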

Serving the Model:

# start TensorFlow Model Server in the background, exposing the REST API
nohup tensorflow_model_server \
  --rest_api_port=8501 \
  --model_name=mobilenet \
  --model_base_path="/tmp/mobilenet" >server.log 2>&1 &
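Optionally (assuming the server is running locally on port 8501 and the requests package installed below is available), you can check the model status over the REST API before sending predictions:

import requests

status = requests.get('http://localhost:8501/v1/models/mobilenet')
print(status.json())   # should report the model version and state (AVAILABLE)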

Sending a request for inference:

!pip install -q requests
import json
import numpy
import requests
# x is an input batch the model accepts, e.g. shape (1, 224, 224, 3) for MobileNet
data = json.dumps({"signature_name": "serving_default",
                   "instances": x.tolist()})
headers = {"content-type": "application/json"}
json_response = requests.post('http://localhost:8501/v1/models/mobilenet:predict',
                              data=data, headers=headers)
predictions = numpy.array(json.loads(json_response.text)["predictions"])
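Since the saved model is an ImageNet-trained MobileNet, the raw scores can be mapped back to class labels, for example:

# optional: decode the top ImageNet classes from the raw scores
decoded = tf.keras.applications.mobilenet.decode_predictions(predictions, top=3)
print(decoded)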
Answered Nov 10 '22 by RakTheGeek