I tried the Inception retraining tutorial on the TensorFlow site: https://www.tensorflow.org/versions/r0.12/how_tos/image_retraining/ The bazel build completes successfully, but when I try to predict an image's class with this command:
bazel build tensorflow/examples/label_image:label_image && \
bazel-bin/tensorflow/examples/label_image/label_image \
--graph=/tmp/output_graph.pb --labels=/tmp/output_labels.txt \
--output_layer=final_result \
--image=$HOME/flower_photos/daisy/21652746_cc379e0eea_m.jpg
I get this error:
tensorflow/examples/label_image/main.cc:305] Running model failed: Not found: FeedInputs: unable to find feed output input
How can I solve this problem?
This thread helped me fix the issue. It seems that we need to provide --input_layer with TensorFlow 1.0+. In your case, this should fix the problem:
bazel build tensorflow/examples/label_image:label_image && \
bazel-bin/tensorflow/examples/label_image/label_image \
--graph=/tmp/output_graph.pb --labels=/tmp/output_labels.txt \
--output_layer=final_result \
--image=$HOME/flower_photos/daisy/21652746_cc379e0eea_m.jpg \
--input_layer=Mul
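If you are unsure which node name to pass to --input_layer, a minimal sketch (assuming a Python environment with TensorFlow 1.x installed, and the same /tmp/output_graph.pb produced by the retraining tutorial) that prints the operation names in the frozen graph so you can confirm the input node (e.g. "Mul") and the output node ("final_result"):

# List operation names in the retrained graph to verify feed/fetch node names.
import tensorflow as tf

graph_def = tf.GraphDef()
with tf.gfile.GFile("/tmp/output_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name="")

for op in graph.get_operations():
    print(op.name)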