Logging requests being served by a TensorFlow Serving model

I have built a model using TensorFlow Serving and ran it on a server using this command:

bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --port=9009 --model_name=ETA_DNN_Regressor --model_base_path=//apps/node-apps/tensorflow-models-repository/ETA

But now this screen is stagnant, giving no info about incoming requests and responses. I tried using the TF_CPP_MIN_VLOG_LEVEL=1 flag, but it produces a lot of output and still no logging/monitoring of incoming requests/responses.

Please suggest how to view those logs.

The second problem I am facing is how to run this process in the background and monitor it constantly. Suppose I close the console: the process should keep running, and I'd like to reconnect to its console again and see real-time traffic.

Any suggestions will be helpful.

asked Sep 22 '17 by user3457384


2 Answers

For rudimentary HTTP request logging, you can set TF_CPP_VMODULE=http_server=1 to raise the VLOG level just for the module http_server.cc. That will get you a very bare request log showing incoming requests and some basic error cases:

2020-08-26 10:42:47.225542: I tensorflow_serving/model_servers/http_server.cc:156] Processing HTTP request: POST /v1/models/mymodel:predict body: 761 bytes.
2020-08-26 10:44:32.472497: I tensorflow_serving/model_servers/http_server.cc:139] Ignoring HTTP request: GET /someboguspath
2020-08-26 10:51:36.540963: I tensorflow_serving/model_servers/http_server.cc:156] Processing HTTP request: GET /v1/someboguspath body: 0 bytes.
2020-08-26 10:51:36.541012: I tensorflow_serving/model_servers/http_server.cc:168] Error Processing HTTP/REST request: GET /v1/someboguspath Error: Invalid argument: Malformed request: GET /v1/someboguspath
2020-08-26 10:53:17.039291: I tensorflow_serving/model_servers/http_server.cc:156] Processing HTTP request: GET /v1/models/someboguspath body: 0 bytes.
2020-08-26 10:53:17.039456: I tensorflow_serving/model_servers/http_server.cc:168] Error Processing HTTP/REST request: GET /v1/models/someboguspath Error: Not found: Could not find any versions of model someboguspath
2020-08-26 11:01:43.466636: I tensorflow_serving/model_servers/http_server.cc:156] Processing HTTP request: POST /v1/models/mymodel:predict body: 755 bytes.
2020-08-26 11:01:43.473195: I tensorflow_serving/model_servers/http_server.cc:168] Error Processing HTTP/REST request: POST /v1/models/mymodel:predict Error: Invalid argument: Incompatible shapes: [1,38,768] vs. [1,40,768]
     [[{{node model/transformer/embeddings/add}}]]
2020-08-26 11:02:56.435942: I tensorflow_serving/model_servers/http_server.cc:156] Processing HTTP request: POST /v1/models/mymodel:predict body: 754 bytes.
2020-08-26 11:02:56.436762: I tensorflow_serving/model_servers/http_server.cc:168] Error Processing HTTP/REST request: POST /v1/models/mymodel:predict Error: Invalid argument: JSON Parse error: Missing a comma or ']' after an array element. at offset: 61

You can skim https://github.com/tensorflow/serving/blob/master/tensorflow_serving/model_servers/http_server.cc for occurrences of VLOG(1) << to see all the logging statements in this module.
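For example, combining this with the asker's original launch command might look like the following. This is only a sketch: --rest_api_port is my assumption (with an arbitrary port number of 8501), since http_server.cc serves the REST endpoint, which is separate from the gRPC --port.

# Raise the VLOG level only for http_server.cc, avoiding the global
# noise of TF_CPP_MIN_VLOG_LEVEL=1:
TF_CPP_VMODULE=http_server=1 \
bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server \
  --port=9009 \
  --rest_api_port=8501 \
  --model_name=ETA_DNN_Regressor \
  --model_base_path=//apps/node-apps/tensorflow-models-repository/ETA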

For gRPC there is probably a corresponding module that you can similarly enable VLOG for; I haven't gone looking for it.

answered Oct 23 '22 by Gunnlaugur Briem


When you run the command below, you start a tensorflow_model_server process that serves the model on a port (9009 here).

bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --port=9009 --model_name=ETA_DNN_Regressor --model_base_path=//apps/node-apps/tensorflow-models-repository/ETA

This command does not display logs; it only runs the model server, which is why the screen looks stagnant. Use the -v=1 flag when running the above command to display logs on your console:

bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server -v=1 --port=9009 --model_name='model_name' --model_base_path=model_path

Now to logging/monitoring of incoming requests/responses. You cannot monitor individual requests/responses when the VLOG level is set to 1. VLOG stands for verbose logging, and higher levels include progressively more detail; use level 3 to display errors, warnings, and informational messages about processing times. For general background on verbose log levels, see http://webhelp.esri.com/arcims/9.2/general/topics/log_verbose.htm
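For example, the launch command with verbosity raised to 3 might look like this (a sketch, reusing the asker's model name and path from the question):

bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server -v=3 \
  --port=9009 \
  --model_name=ETA_DNN_Regressor \
  --model_base_path=//apps/node-apps/tensorflow-models-repository/ETA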

Now to your second problem. I would suggest using the environment variable provided by TensorFlow Serving, export TF_CPP_MIN_VLOG_LEVEL=3, instead of setting flags. Set the environment variable before you start the server. Then run the command below to start the server and store the logs in a logfile named my_log:

bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --port=9009 --model_name='model_name' --model_base_path=model_path &> my_log &

Even if you close your console, the model server keeps running and all its logs are stored in my_log. Hope this helps.
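To make the background process survive a closed console and to reattach to the live traffic later, one option is nohup plus tail -f (a sketch, assuming a bash shell and the my_log file from above; a plain & job can be killed when the shell that started it exits):

# Start the server immune to hangups, redirecting all output to my_log:
nohup bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server \
  --port=9009 --model_name='model_name' --model_base_path=model_path \
  &> my_log &

# Later, from any new console, follow the log in real time:
tail -f my_log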

answered Oct 23 '22 by ReInvent_IO