I'm trying to test simple TensorFlow Lite C++ code with a TensorFlow Lite model.
It takes two floats and computes XOR. However, when I change the inputs, the output doesn't change. I guess the line interpreter->typed_tensor<float>(0)[0] = x;
is wrong, so the inputs aren't being applied properly. How should I change the code to make it work?
This is my code:
#include <stdio.h>
#include <stdlib.h>
#include <string>
#include <vector>
#include "tensorflow/contrib/lite/kernels/register.h"
#include "tensorflow/contrib/lite/model.h"
#include "tensorflow/contrib/lite/string_util.h"
#include "tensorflow/contrib/lite/tools/mutable_op_resolver.h"
int main(){
    const char graph_path[14] = "xorGate.lite";
    const int num_threads = 1;
    std::string input_layer_type = "float";
    std::vector<int> sizes = {2};
    float x, y;

    std::unique_ptr<tflite::FlatBufferModel> model(
        tflite::FlatBufferModel::BuildFromFile(graph_path));
    if(!model){
        printf("Failed to mmap model\n");
        exit(0);
    }

    tflite::ops::builtin::BuiltinOpResolver resolver;
    std::unique_ptr<tflite::Interpreter> interpreter;
    tflite::InterpreterBuilder(*model, resolver)(&interpreter);
    if(!interpreter){
        printf("Failed to construct interpreter\n");
        exit(0);
    }

    interpreter->UseNNAPI(false);
    if(num_threads != 1){
        interpreter->SetNumThreads(num_threads);
    }

    int input = interpreter->inputs()[0];
    interpreter->ResizeInputTensor(0, sizes);
    if(interpreter->AllocateTensors() != kTfLiteOk){
        printf("Failed to allocate tensors\n");
        exit(0);
    }

    // read two numbers
    std::printf("Type two float numbers : ");
    std::scanf("%f %f", &x, &y);
    interpreter->typed_tensor<float>(0)[0] = x;
    interpreter->typed_tensor<float>(0)[1] = y;
    printf("hello\n");
    fflush(stdout);

    if(interpreter->Invoke() != kTfLiteOk){
        std::printf("Failed to invoke!\n");
        exit(0);
    }

    float* output = interpreter->typed_output_tensor<float>(0);
    printf("output = %f\n", output[0]);
    return 0;
}
This is the message that comes out when I run the code:
root@localhost:/home# ./a.out
nnapi error: unable to open library libneuralnetworks.so
Type two float numbers : 1 1
hello
output = 0.112958
root@localhost:/home# ./a.out
nnapi error: unable to open library libneuralnetworks.so
Type two float numbers : 0 1
hello
output = 0.112958
Resolved by changing

interpreter->typed_tensor<float>(0)[0] = x;
interpreter->typed_tensor<float>(0)[1] = y;

to

interpreter->typed_input_tensor<float>(0)[0] = x;
interpreter->typed_input_tensor<float>(0)[1] = y;

typed_tensor<float>(i) indexes the interpreter's full tensor list by tensor index, while typed_input_tensor<float>(i) indexes only the model's inputs (it resolves interpreter->inputs()[i] for you). Tensor 0 in the flat list was not the input tensor of this model, so the writes never reached the input, and Invoke() kept producing the same output.