I am trying to get a TensorFlow Lite example to run on a machine with an ARM Cortex-A72 processor. Unfortunately, I wasn't able to deploy a test model due to the lack of examples on how to use the C++ API. I will try to explain what I have achieved so far.
Create the tflite model
I have created a simple linear regression model, which should approximate the function f(x) = 2x - 1, and converted it to TFLite format. I got this code snippet from some tutorial, but I am unable to find it anymore.
import tensorflow as tf
import numpy as np
from tensorflow import keras
from tensorflow.contrib import lite

# Train a single Dense unit to approximate f(x) = 2x - 1
model = keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])])
model.compile(optimizer='sgd', loss='mean_squared_error')
xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float)
model.fit(xs, ys, epochs=500)
print(model.predict([10.0]))

# Save the Keras model and convert it to a .tflite flatbuffer (TF 1.x contrib API)
keras_file = 'linear.h5'
keras.models.save_model(model, keras_file)
converter = lite.TocoConverter.from_keras_model_file(keras_file)
tflite_model = converter.convert()
open('linear.tflite', 'wb').write(tflite_model)
This creates a binary called linear.tflite, which I should be able to load.
Compile TensorFlow Lite for my machine
TensorFlow Lite comes with a script for compiling on machines with the aarch64 architecture. I followed the guide here to do this, even though I had to modify the Makefile slightly. Note that I compiled natively on my target system. This created a static library called libtensorflow-lite.a.
Problem: Inference
I tried to follow the tutorial on the site here, and simply pasted together the code snippets for loading and running the model, e.g.
class FlatBufferModel {
  // Build a model based on a file. Return a nullptr in case of failure.
  static std::unique_ptr<FlatBufferModel> BuildFromFile(
      const char* filename,
      ErrorReporter* error_reporter);

  // Build a model based on a pre-loaded flatbuffer. The caller retains
  // ownership of the buffer and should keep it alive until the returned object
  // is destroyed. Return a nullptr in case of failure.
  static std::unique_ptr<FlatBufferModel> BuildFromBuffer(
      const char* buffer,
      size_t buffer_size,
      ErrorReporter* error_reporter);
};
tflite::FlatBufferModel model("./linear.tflite");
tflite::ops::builtin::BuiltinOpResolver resolver;
std::unique_ptr<tflite::Interpreter> interpreter;
tflite::InterpreterBuilder(*model, resolver)(&interpreter);
// Resize input tensors, if desired.
interpreter->AllocateTensors();
float* input = interpreter->typed_input_tensor<float>(0);
// Fill `input`.
interpreter->Invoke();
float* output = interpreter->typed_output_tensor<float>(0);
When trying to compile this via
g++ demo.cpp libtensorflow-lite.a
I get a load of errors. Log:
root@localhost:/inference# g++ demo.cpp libtensorflow-lite.a
demo.cpp:3:15: error: ‘unique_ptr’ in namespace ‘std’ does not name a template type
static std::unique_ptr<FlatBufferModel> BuildFromFile(
^~~~~~~~~~
demo.cpp:10:15: error: ‘unique_ptr’ in namespace ‘std’ does not name a template type
static std::unique_ptr<FlatBufferModel> BuildFromBuffer(
^~~~~~~~~~
demo.cpp:16:1: error: ‘tflite’ does not name a type
tflite::FlatBufferModel model("./linear.tflite");
^~~~~~
demo.cpp:18:1: error: ‘tflite’ does not name a type
tflite::ops::builtin::BuiltinOpResolver resolver;
^~~~~~
demo.cpp:19:6: error: ‘unique_ptr’ in namespace ‘std’ does not name a template type
std::unique_ptr<tflite::Interpreter> interpreter;
^~~~~~~~~~
demo.cpp:20:1: error: ‘tflite’ does not name a type
tflite::InterpreterBuilder(*model, resolver)(&interpreter);
^~~~~~
demo.cpp:23:1: error: ‘interpreter’ does not name a type
interpreter->AllocateTensors();
^~~~~~~~~~~
demo.cpp:25:16: error: ‘interpreter’ was not declared in this scope
float* input = interpreter->typed_input_tensor<float>(0);
^~~~~~~~~~~
demo.cpp:25:48: error: expected primary-expression before ‘float’
float* input = interpreter->typed_input_tensor<float>(0);
^~~~~
demo.cpp:28:1: error: ‘interpreter’ does not name a type
interpreter->Invoke();
^~~~~~~~~~~
demo.cpp:30:17: error: ‘interpreter’ was not declared in this scope
float* output = interpreter->typed_output_tensor<float>(0);
^~~~~~~~~~~
demo.cpp:30:50: error: expected primary-expression before ‘float’
float* output = interpreter->typed_output_tensor<float>(0);
I am relatively new to C++, so I may be missing something obvious here. It seems, however, that other people have trouble with the C++ API as well (see this GitHub issue). Has anybody else stumbled across this and got it to run?
The most important aspects for me to cover would be:
1.) Where and how do I define the signature, so that the model knows what to treat as inputs and outputs?
2.) Which headers do I have to include?
Thanks!
EDIT
Thanks to @Alex Cohn, the compiler was able to find the correct headers. I also realized that I probably do not need to redeclare the FlatBufferModel class, so I ended up with this code (minor change is marked):
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"
#include "tensorflow/lite/tools/gen_op_registration.h"
auto model = tflite::FlatBufferModel::BuildFromFile("linear.tflite"); //CHANGED
tflite::ops::builtin::BuiltinOpResolver resolver;
std::unique_ptr<tflite::Interpreter> interpreter;
tflite::InterpreterBuilder(*model, resolver)(&interpreter);
// Resize input tensors, if desired.
interpreter->AllocateTensors();
float* input = interpreter->typed_input_tensor<float>(0);
// Fill `input`.
interpreter->Invoke();
float* output = interpreter->typed_output_tensor<float>(0);
This reduces the number of errors greatly, but I am not sure how to resolve the rest:
root@localhost:/inference# g++ demo.cpp -I/tensorflow
demo.cpp:10:34: error: expected ‘)’ before ‘,’ token
tflite::InterpreterBuilder(*model, resolver)(&interpreter);
^
demo.cpp:10:44: error: expected initializer before ‘)’ token
tflite::InterpreterBuilder(*model, resolver)(&interpreter);
^
demo.cpp:13:1: error: ‘interpreter’ does not name a type
interpreter->AllocateTensors();
^~~~~~~~~~~
demo.cpp:18:1: error: ‘interpreter’ does not name a type
interpreter->Invoke();
^~~~~~~~~~~
How should I tackle these? It seems that I have to define my own resolver, but I have no clue how to do that.
I finally got it to run. Given that my directory structure looks like this:
/(root)
    /tensorflow
        # whole tf repo
    /demo
        demo.cpp
        linear.tflite
        libtensorflow-lite.a
I changed demo.cpp to:
#include <stdio.h>
#include <stdlib.h>

#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"
#include "tensorflow/lite/tools/gen_op_registration.h"

int main(){
    // Load the .tflite flatbuffer from disk
    std::unique_ptr<tflite::FlatBufferModel> model = tflite::FlatBufferModel::BuildFromFile("linear.tflite");
    if(!model){
        printf("Failed to mmap model\n");
        exit(1);
    }

    // Build the interpreter with the built-in op resolver
    tflite::ops::builtin::BuiltinOpResolver resolver;
    std::unique_ptr<tflite::Interpreter> interpreter;
    tflite::InterpreterBuilder(*model.get(), resolver)(&interpreter);

    // Resize input tensors, if desired.
    interpreter->AllocateTensors();

    float* input = interpreter->typed_input_tensor<float>(0);
    // Dummy input for testing
    *input = 2.0;

    interpreter->Invoke();

    float* output = interpreter->typed_output_tensor<float>(0);
    printf("Result is: %f\n", *output);
    return 0;
}
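One optional addition: AllocateTensors() and Invoke() both return a TfLiteStatus, so the two bare calls above can be checked against kTfLiteOk. Just a sketch of what that could look like; the demo works without it:

// Optional: check the return values instead of assuming success
if (interpreter->AllocateTensors() != kTfLiteOk) {
    printf("Failed to allocate tensors\n");
    return 1;
}
if (interpreter->Invoke() != kTfLiteOk) {
    printf("Failed to invoke the interpreter\n");
    return 1;
}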
Also, I had to adapt my compile command (I had to install flatbuffers manually to make it work). What worked for me was:
g++ demo.cpp -I/tensorflow -L/demo -ltensorflow-lite -lrt -ldl -pthread -lflatbuffers -o demo
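Running ./demo then prints a value close to 3.0 for the dummy input of 2.0, since the trained model approximates f(x) = 2x - 1.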
Thanks to @AlexCohn for getting me on the right track!
Here is the minimal set of includes:
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"
#include "tensorflow/lite/tools/gen_op_registration.h"
These will include other headers, e.g. <memory>, which defines std::unique_ptr.
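Regarding question 1: the signature (which tensors are inputs and outputs) is determined by the Keras model and fixed by the converter when the .tflite file is written; the C++ side only reads it. You can inspect what the interpreter sees via interpreter->inputs(), interpreter->outputs() and interpreter->tensor(). A rough sketch, assuming the interpreter has already been built as in the code above:

// Print the name and shape of every input and output tensor of the loaded model
for (int i : interpreter->inputs()) {
    TfLiteTensor* t = interpreter->tensor(i);
    printf("input  %d: %s, dims:", i, t->name ? t->name : "?");
    for (int d = 0; d < t->dims->size; ++d) printf(" %d", t->dims->data[d]);
    printf("\n");
}
for (int i : interpreter->outputs()) {
    TfLiteTensor* t = interpreter->tensor(i);
    printf("output %d: %s, dims:", i, t->name ? t->name : "?");
    for (int d = 0; d < t->dims->size; ++d) printf(" %d", t->dims->data[d]);
    printf("\n");
}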