Convert a TensorFlow model into a format that can be served

I am following the TensorFlow Serving documentation to convert my trained model into a format that can be served in a Docker container. As I'm new to TensorFlow, I am struggling to convert this trained model into a form suitable for serving.

The model is already trained and I have the checkpoint file and the .meta file. From those two files I need to produce the .pb file and the variables folder, so that the result looks like the tree below. Can anyone suggest an approach for getting this done so the model can be served?

.
|-- tensorflow model
    |-- 1
        |-- saved_model.pb
        |-- variables
            |-- variables.data-00000-of-00001
            |-- variables.index
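For reference, a minimal sketch of one way such a conversion can look with the TF 1.x API: restore the checkpoint/.meta pair, look up the input and output tensors, and re-export as a SavedModel. The checkpoint path and the tensor names 'input:0' / 'output:0' are hypothetical placeholders; the real names depend on the graph.

import tensorflow as tf

export_path = './tensorflow model/1'

with tf.Session(graph=tf.Graph()) as sess:
    # Rebuild the graph from the .meta file and restore the trained weights.
    saver = tf.train.import_meta_graph('model.ckpt.meta')
    saver.restore(sess, 'model.ckpt')

    # Placeholder names -- replace with the actual tensor names in your graph.
    inputs = sess.graph.get_tensor_by_name('input:0')
    outputs = sess.graph.get_tensor_by_name('output:0')

    # Writes saved_model.pb and the variables/ folder under export_path.
    tf.saved_model.simple_save(
        sess,
        export_path,
        inputs={'inputs': inputs},
        outputs={'outputs': outputs})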
asked by Ashish

2 Answers

There are multiple ways of doing this, and other methods may be required for more complex models. I am currently using the method described here, which works well for tf.keras.models.Model and tf.keras.Sequential models (I am not sure about models built with TensorFlow subclassing).

Below is a minimal working example, including creating a model in Python (judging by your folder structure, you have already completed this step and can skip it):

import tensorflow as tf
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
import tensorflow.keras.backend as K

inputs = Input(shape=(2,))
x = Dense(128, activation='relu')(inputs)
x = Dense(32, activation='relu')(x)
outputs = Dense(1)(x)

model = Model(inputs=inputs, outputs=outputs)
model.compile(optimizer='adam', loss='mse')

# To load existing weights, the architecture must match the original model:
#model.load_weights(".//MODEL_WEIGHT_PATH//WEIGHT.h5")

export_path = 'SAVE_PATH//tensorflow_model//1'

# Export in the SavedModel format expected by TensorFlow Serving (TF 1.x API).
with K.get_session() as sess:
    tf.saved_model.simple_save(
            sess,
            export_path,
            inputs={'inputs': model.input}, # for single input
            #inputs={t.name[:-5]: t for t in model.input}, # for multiple inputs
            outputs={'outputs': model.output})

I suggest using the folder name "tensorflow_model" instead of "tensorflow model", to avoid possible problems with the space.

Then we can start the serving container from a terminal (on Windows, use ^ instead of \ for line breaks, and //C/ instead of C:\ in paths):

docker run -p 8501:8501 --name tfserving_test \
  --mount type=bind,source="SAVE_PATH/tensorflow_model",target=/models/tensorflow_model \
  -e MODEL_NAME=tensorflow_model -t tensorflow/serving
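Before sending predictions, it can be worth checking that the model actually loaded. TensorFlow Serving exposes a model status endpoint on the same REST port (assuming the default port 8501 mapped above):

import requests

# Model status endpoint; should report state 'AVAILABLE' once loaded.
r = requests.get('http://localhost:8501/v1/models/tensorflow_model')
print(r.json())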

Now the container should be up and running, and we can test the serving with Python:

import requests
import json
#import numpy as np

payload = {
  "instances": [{'inputs': [1.,1.]}]
}

r = requests.post('http://localhost:8501/v1/models/tensorflow_model:predict', json=payload)
print(json.loads(r.content))
# {'predictions': [[0.121025]]}

The container is working with our model, returning the prediction 0.121025 for the input [1., 1.].
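The same :predict endpoint also accepts batches, i.e. several instances in one request (same payload schema and model name as above):

payload = {
    "instances": [{'inputs': [1., 1.]}, {'inputs': [0., 0.]}]
}

r = requests.post('http://localhost:8501/v1/models/tensorflow_model:predict', json=payload)
print(r.json())  # 'predictions' contains one output per instance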

answered by KrisR89


I hope this helps:

import tensorflow as tf
from tensorflow.contrib.keras import backend as K

# Disable training-specific behaviour (dropout, batch norm updates) for export.
K.set_learning_phase(0)
model = tf.keras.models.load_model('my_model.h5')

# Export to a numbered version folder: simple_save raises if the directory
# already exists, and TensorFlow Serving expects a numeric version subfolder.
export_path = './1'
with K.get_session() as sess:
    tf.saved_model.simple_save(
        sess,
        export_path,
        inputs={'input_image': model.input},
        outputs={t.name: t for t in model.outputs}
    )
    print('Converted to SavedModel!!!')
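As a sanity check, the exported model can be loaded back and its signature inspected (TF 1.x API; the path assumes the export above):

import tensorflow as tf

with tf.Session(graph=tf.Graph()) as sess:
    # 'serve' is the tag simple_save attaches to the exported graph.
    meta_graph = tf.saved_model.loader.load(sess, ['serve'], './1')
    print(meta_graph.signature_def)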
answered by Yesken