How to create a tensorflow serving client for the 'wide and deep' model?

I've created a model based on the 'wide and deep' example (https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/learn/wide_n_deep_tutorial.py).

I've exported the model as follows:

  m = build_estimator(model_dir)
  m.fit(input_fn=lambda: input_fn(df_train, True), steps=FLAGS.train_steps)
  results = m.evaluate(input_fn=lambda: input_fn(df_test, True), steps=1)

  print('Model statistics:')

  for key in sorted(results):
    print("%s: %s" % (key, results[key]))

  print('Done training!!!')

  # Export model
  export_path = sys.argv[-1]
  print('Exporting trained model to %s' % export_path)

  m.export(
      export_path,
      input_fn=serving_input_fn,
      use_deprecated_input_fn=False,
      input_feature_key=INPUT_FEATURE_KEY)

My question is, how do I create a client to make predictions from this exported model? Also, have I exported the model correctly?

Ultimately I need to be able to do this in Java too. I suspect I can do this by creating Java classes from the proto files using gRPC.

The documentation is very sketchy, which is why I am asking here.

Many thanks!

Asked Jan 17 '17 by Adam


1 Answer

I wrote a simple tutorial, Exporting and Serving a TensorFlow Wide & Deep Model.

TL;DR

To export an estimator there are four steps (a sketch follows the list):

  1. Define features for export as a list of all features used during estimator initialization.

  2. Create a feature config using create_feature_spec_for_parsing.

  3. Build a serving_input_fn for serving with input_fn_utils.build_parsing_serving_input_fn.

  4. Export the model using export_savedmodel().
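
A minimal sketch of those four steps, assuming the TF 1.x tf.contrib.learn APIs and the wide_columns/deep_columns lists from the tutorial's build_estimator (names here are illustrative, not the tutorial's exact code):

  import tensorflow as tf
  from tensorflow.contrib.layers import create_feature_spec_for_parsing
  from tensorflow.contrib.learn.python.learn.utils import input_fn_utils

  # 1. All feature columns that were used when the estimator was built.
  feature_columns = wide_columns + deep_columns

  # 2. Feature spec describing how to parse a serialized tf.Example.
  feature_spec = create_feature_spec_for_parsing(feature_columns)

  # 3. Serving input_fn that expects serialized tf.Example protos.
  serving_input_fn = input_fn_utils.build_parsing_serving_input_fn(feature_spec)

  # 4. Export a SavedModel from the trained estimator m.
  servable_model_path = m.export_savedmodel(export_path, serving_input_fn)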

To run a client script properly you need the following four steps (sketches for steps 2 and 4 follow the list):

  1. Create and place your script somewhere in the /serving/ folder, e.g. /serving/tensorflow_serving/example/

  2. Create or modify the corresponding BUILD file by adding a py_binary.

  3. Build and run a model server, e.g. tensorflow_model_server.

  4. Create, build and run a client that sends a tf.Example to the tensorflow_model_server for inference.
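
For step 2, a py_binary entry in the BUILD file might look like this; the target labels follow the serving repo's example BUILD of that era and may differ in your checkout, and the script name is a placeholder:

  py_binary(
      name = "wide_and_deep_client",
      srcs = ["wide_and_deep_client.py"],
      deps = [
          "//tensorflow_serving/apis:predict_proto_py_pb2",
          "//tensorflow_serving/apis:prediction_service_proto_py_pb2",
          "@org_tensorflow//tensorflow:tensorflow_py",
      ],
  )

And for step 4, a minimal client sketch, assuming the server was started with --port=9000 --model_name=wide_and_deep and that the exported signature's input key is 'inputs' (model name, host, port and feature names are placeholders; use your model's actual features):

  from grpc.beta import implementations
  import tensorflow as tf

  from tensorflow_serving.apis import predict_pb2
  from tensorflow_serving.apis import prediction_service_pb2

  channel = implementations.insecure_channel('localhost', 9000)
  stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)

  # Serialize one tf.Example with the model's features
  # (names and values here are placeholders).
  example = tf.train.Example(features=tf.train.Features(feature={
      'age': tf.train.Feature(int64_list=tf.train.Int64List(value=[35])),
      'education': tf.train.Feature(
          bytes_list=tf.train.BytesList(value=[b'Bachelors'])),
  }))
  serialized = example.SerializeToString()

  # Ask the server's model for a prediction on the serialized example.
  request = predict_pb2.PredictRequest()
  request.model_spec.name = 'wide_and_deep'
  request.inputs['inputs'].CopyFrom(
      tf.contrib.util.make_tensor_proto(serialized, shape=[1]))

  result = stub.Predict(request, 10.0)  # 10 second timeout
  print(result)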

For more details, see the tutorial itself.

Answered Sep 25 '22 by MtDersvan