I have demo.sh working fine, and I've looked through parser_eval.py and grokked it all to some extent. However, I don't see how to serve this model using TensorFlow Serving. Two issues stand out off the top of my head:
1) There's no exported model for these graphs. The graph is built at each invocation using a graph builder (e.g. structured_graph_builder.py), a context protocol buffer, and a whole bunch of other stuff that I don't fully understand at this point (it seems to register additional syntaxnet.ops as well). So: is it possible to export these models into the "bundle" form required by Serving and the SessionBundleFactory, and if so, how? If not, it seems the graph-building logic/steps will need to be re-implemented in C++, because Serving only runs in a C++ context.
2) demo.sh is actually two models literally piped together with a UNIX pipe, so any Servable would (probably) have to build two sessions and marshal the data from one to the other. Is this the correct approach? Or is it possible to build one "big" graph containing both models patched together and export that instead?
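For what it's worth, the two-sessions-in-one-servable idea from (2) can be sketched in plain Python. Here `run_tagger` and `run_parser` are invented stand-ins for the `Session.run()` calls on the two SyntaxNet graphs; the point is only to show the output of the first stage being marshalled directly into the second, in-process, instead of being serialized through a UNIX pipe:

```python
def run_tagger(sentence):
    # Stand-in for the POS-tagger model: annotate each token with a tag.
    # A real implementation would feed the sentence into the first
    # session and read back tagged CoNLL-style rows.
    return [(tok, "TAG") for tok in sentence.split()]

def run_parser(tagged_tokens):
    # Stand-in for the dependency-parser model: consume the tagger's
    # output directly, with no stdin/stdout round trip.
    return [(tok, tag, "HEAD") for tok, tag in tagged_tokens]

def parse(sentence):
    # One servable, two stages: what a Servable wrapping two sessions
    # would do in place of demo.sh's pipe.
    return run_parser(run_tagger(sentence))

print(parse("hello world"))
# -> [('hello', 'TAG', 'HEAD'), ('world', 'TAG', 'HEAD')]
```

This is only a shape sketch under stated assumptions, not SyntaxNet's actual data flow; the real marshalling happens via serialized Sentence protos rather than Python tuples.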
So after a lot of learning and research, I ended up putting together a pull request for tensorflow/models and syntaxnet which achieves the goal of serving Parsey McParseface from TF Serving:
https://github.com/tensorflow/models/pull/250
What's NOT in it is the actual "serving" code, but that is relatively trivial compared to the work needed to resolve the issues in the question above.