TensorFlow Serving: Update model_config (add additional models) at runtime

I'm busy configuring a TensorFlow Serving client that asks a TensorFlow Serving server to produce predictions on a given input image, for a given model.

If the model being requested has not yet been served, it is downloaded from a remote URL to a folder where the server's models are located. (The client does this). At this point I need to update the model_config and trigger the server to reload it.

This functionality appears to exist (based on https://github.com/tensorflow/serving/pull/885 and https://github.com/tensorflow/serving/blob/master/tensorflow_serving/apis/model_service.proto#L22), but I can't find any documentation on how to actually use it.

I am essentially looking for a python script with which I can trigger the reload from client side (or otherwise to configure the server to listen for changes and trigger the reload itself).

Karl asked Jan 30 '19

3 Answers

It took me ages of trawling through pull requests to finally find a code example for this, so for the next person who has the same question, here is an example of how to do it. (You'll need the tensorflow-serving-api package for this: pip install tensorflow-serving-api.)

It is based on this pull request (which at the time of writing hadn't been accepted and was closed because it needed review): https://github.com/tensorflow/serving/pull/1065

from tensorflow_serving.apis import model_service_pb2_grpc
from tensorflow_serving.apis import model_management_pb2
from tensorflow_serving.config import model_server_config_pb2

import grpc

def add_model_config(host, name, base_path, model_platform):
    channel = grpc.insecure_channel(host)
    stub = model_service_pb2_grpc.ModelServiceStub(channel)
    request = model_management_pb2.ReloadConfigRequest()
    model_server_config = model_server_config_pb2.ModelServerConfig()

    # Create a config to add to the list of served models
    config_list = model_server_config_pb2.ModelConfigList()
    one_config = config_list.config.add()
    one_config.name = name
    one_config.base_path = base_path
    one_config.model_platform = model_platform

    model_server_config.model_config_list.CopyFrom(config_list)
    request.config.CopyFrom(model_server_config)

    # Sanity checks: the request should be fully initialized by now
    print(request.IsInitialized())
    print(request.ListFields())

    # The second argument is the gRPC timeout in seconds
    response = stub.HandleReloadConfigRequest(request, 10)
    if response.status.error_code == 0:
        print("Reload successful")
    else:
        print("Reload failed!")
        print(response.status.error_code)
        print(response.status.error_message)


add_model_config(host="localhost:8500",
                 name="my_model",
                 base_path="/models/my_model",
                 model_platform="tensorflow")
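
One caveat: HandleReloadConfigRequest replaces the server's entire model config, so any model you leave out of the list will be unloaded (the next answer avoids this by merging with the existing config file). A minimal sketch of keeping a second, already-served model loaded (the model name and path here are hypothetical):

# Hypothetical sketch: inside add_model_config, before the CopyFrom calls,
# re-list any model that should stay loaded, since the reload request
# replaces the full server config
existing = config_list.config.add()
existing.name = "already_served_model"
existing.base_path = "/models/already_served_model"
existing.model_platform = "tensorflow"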
Karl answered Nov 05 '22

Add a model to a TF Serving server and to the existing config file at conf_filepath: use the arguments name, base_path, and model_platform for the new model. This keeps the original models intact.

Note a small difference from @Karl's answer: it uses MergeFrom instead of CopyFrom.

pip install tensorflow-serving-api

import grpc
from google.protobuf import text_format
from tensorflow_serving.apis import model_service_pb2_grpc, model_management_pb2
from tensorflow_serving.config import model_server_config_pb2


def add_model_config(conf_filepath, host, name, base_path, model_platform):
    # Read the existing config file (protobuf text format)
    with open(conf_filepath, 'r') as f:
        existing_config = f.read()
    channel = grpc.insecure_channel(host)
    stub = model_service_pb2_grpc.ModelServiceStub(channel)
    request = model_management_pb2.ReloadConfigRequest()
    model_server_config = model_server_config_pb2.ModelServerConfig()
    config_list = model_server_config_pb2.ModelConfigList()
    # Parse the current config so the models already being served are kept
    model_server_config = text_format.Parse(text=existing_config, message=model_server_config)

    # Create a config to add to the list of served models
    one_config = config_list.config.add()
    one_config.name = name
    one_config.base_path = base_path
    one_config.model_platform = model_platform

    # MergeFrom (rather than CopyFrom) appends the new model to the existing list
    model_server_config.model_config_list.MergeFrom(config_list)
    request.config.CopyFrom(model_server_config)

    # The second argument is the gRPC timeout in seconds
    response = stub.HandleReloadConfigRequest(request, 10)
    if response.status.error_code == 0:
        # Persist the updated config so it survives a server restart
        with open(conf_filepath, 'w') as f:
            f.write(text_format.MessageToString(request.config))
        print("Updated TF Serving conf file")
    else:
        print("Failed to update model_config_list!")
        print(response.status.error_code)
        print(response.status.error_message)
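
For example, assuming the server was started with --model_config_file pointing at the same file the function updates (the path and model name below are placeholders, not from the original post):

add_model_config(conf_filepath="/models/models.config",
                 host="localhost:8500",
                 name="my_new_model",
                 base_path="/models/my_new_model",
                 model_platform="tensorflow")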
maish.r answered Nov 05 '22

While the solutions mentioned here work fine, there is one more method you can use to hot-reload your models: the --model_config_file_poll_wait_seconds flag.

As mentioned in the documentation:

By setting the --model_config_file_poll_wait_seconds flag, you instruct the server to periodically check for a new config file at the --model_config_file filepath.

So you just have to update the config file at the --model_config_file path, and tf-serving will load any new models and unload any models removed from the config file. An example is sketched below.
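
For example (a minimal sketch; the paths, ports, and model name are placeholders), the server could be started with:

tensorflow_model_server \
    --port=8500 \
    --rest_api_port=8501 \
    --model_config_file=/models/models.config \
    --model_config_file_poll_wait_seconds=60

where /models/models.config uses the usual protobuf text format:

model_config_list {
  config {
    name: "my_model"
    base_path: "/models/my_model"
    model_platform: "tensorflow"
  }
}

Appending another config { ... } block to the file and waiting for the next poll is then enough to serve the new model.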


Edit 1: I looked at the source code and it seems the flag has been present since very early versions of tf-serving, but there have been instances where some users were not able to use it (see this). So try to use the latest version if possible.

thisisbhavin answered Nov 05 '22