I built an Android app in Android Studio with Java, in which TFLite is used via

implementation 'org.tensorflow:tensorflow-lite:+'

under my build.gradle dependencies. Inference times are not so great, so now I want to use TFL in Android's NDK.
So I built an exact copy of the Java app in Android Studio's NDK, and now I'm trying to include the TFL libs in the project. I followed TensorFlow Lite's Android guide and built the TFL library locally (and got an AAR file), and included the library in my NDK project in Android Studio.
Now I'm trying to use the TFL library in my C++ file by #include-ing it, but I get an error message: cannot find tensorflow (or any other name I try, according to the name I give it in my CMakeLists.txt file).
App build.gradle:
apply plugin: 'com.android.application'

android {
    compileSdkVersion 29
    buildToolsVersion "29.0.3"

    defaultConfig {
        applicationId "com.ndk.tflite"
        minSdkVersion 28
        targetSdkVersion 29
        versionCode 1
        versionName "1.0"
        testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner"
        externalNativeBuild {
            cmake {
                cppFlags ""
            }
        }
        ndk {
            abiFilters 'arm64-v8a'
        }
    }

    buildTypes {
        release {
            minifyEnabled false
            proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'
        }
    }

    // tf lite
    aaptOptions {
        noCompress "tflite"
    }

    externalNativeBuild {
        cmake {
            path "src/main/cpp/CMakeLists.txt"
            version "3.10.2"
        }
    }
}

dependencies {
    implementation fileTree(dir: 'libs', include: ['*.jar'])
    implementation 'androidx.appcompat:appcompat:1.1.0'
    implementation 'androidx.constraintlayout:constraintlayout:1.1.3'
    testImplementation 'junit:junit:4.12'
    androidTestImplementation 'androidx.test.ext:junit:1.1.1'
    androidTestImplementation 'androidx.test.espresso:espresso-core:3.2.0'
    // tflite build
    compile(name:'tensorflow-lite', ext:'aar')
}
Project build.gradle:
buildscript {
    repositories {
        google()
        jcenter()
    }
    dependencies {
        classpath 'com.android.tools.build:gradle:3.6.2'
    }
}

allprojects {
    repositories {
        google()
        jcenter()
        // native tflite
        flatDir {
            dirs 'libs'
        }
    }
}

task clean(type: Delete) {
    delete rootProject.buildDir
}
CMakeLists.txt:
cmake_minimum_required(VERSION 3.4.1)

add_library( # Sets the name of the library.
        native-lib
        # Sets the library as a shared library.
        SHARED
        # Provides a relative path to your source file(s).
        native-lib.cpp )

add_library( # Sets the name of the library.
        tensorflow-lite
        # Sets the library as a shared library.
        SHARED
        # Provides a relative path to your source file(s).
        native-lib.cpp )

find_library( # Sets the name of the path variable.
        log-lib
        # Specifies the name of the NDK library that
        # you want CMake to locate.
        log )

target_link_libraries( # Specifies the target library.
        native-lib tensorflow-lite
        # Links the target library to the log library
        # included in the NDK.
        ${log-lib} )
native-lib.cpp:
#include <jni.h>
#include <string>
#include "tensorflow"

extern "C" JNIEXPORT jstring JNICALL
Java_com_xvu_f32c_1jni_MainActivity_stringFromJNI(
        JNIEnv* env,
        jobject /* this */) {
    std::string hello = "Hello from C++";
    return env->NewStringUTF(hello.c_str());
}

class FlatBufferModel {
    // Build a model based on a file. Return a nullptr in case of failure.
    static std::unique_ptr<FlatBufferModel> BuildFromFile(
            const char* filename,
            ErrorReporter* error_reporter);

    // Build a model based on a pre-loaded flatbuffer. The caller retains
    // ownership of the buffer and should keep it alive until the returned object
    // is destroyed. Return a nullptr in case of failure.
    static std::unique_ptr<FlatBufferModel> BuildFromBuffer(
            const char* buffer,
            size_t buffer_size,
            ErrorReporter* error_reporter);
};
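For reference, the kind of usage I'm ultimately trying to get working is the canonical C++ example from the TFLite docs (a sketch; the model path is a placeholder):

#include <memory>
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

// Sketch of the intended inference flow; "model.tflite" is a placeholder path.
void RunTfLiteInference() {
    std::unique_ptr<tflite::FlatBufferModel> model =
            tflite::FlatBufferModel::BuildFromFile("model.tflite");
    tflite::ops::builtin::BuiltinOpResolver resolver;
    std::unique_ptr<tflite::Interpreter> interpreter;
    tflite::InterpreterBuilder(*model, resolver)(&interpreter);
    interpreter->AllocateTensors();
    float* input = interpreter->typed_input_tensor<float>(0);
    // ... fill input ...
    interpreter->Invoke();
    float* output = interpreter->typed_output_tensor<float>(0);
    // ... read results from output ...
}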
I also tried to follow these guides, but in my case I used Bazel to build the TFL libs.
Trying to build the classification demo (label_image), I managed to build it and adb push it to my device, but when trying to run it I got the following error:
ERROR: Could not open './mobilenet_quant_v1_224.tflite'.
Failed to mmap model ./mobilenet_quant_v1_224.tflite
Adding android_sdk_repository / android_ndk_repository in WORKSPACE got me an error: WORKSPACE:149:1: Cannot redefine repository after any load statement in the WORKSPACE file (for repository 'androidsdk'), and locating these statements at different places in WORKSPACE resulted in the same error.
I then continued with zimenglyu's post: I compiled libtensorflowLite.so and edited CMakeLists.txt so that the libtensorflowLite.so file was referenced, but left the FlatBuffer part out. The Android project compiled successfully, but there was no evident change; I still can't include any TFLite libraries.
Trying to compile TFL, I added a cc_binary to tensorflow/tensorflow/lite/BUILD (following the label_image example):
cc_binary(
    name = "native-lib",
    srcs = [
        "native-lib.cpp",
    ],
    linkopts = tflite_experimental_runtime_linkopts() + select({
        "//tensorflow:android": [
            "-pie",
            "-lm",
        ],
        "//conditions:default": [],
    }),
    deps = [
        "//tensorflow/lite/c:common",
        "//tensorflow/lite:framework",
        "//tensorflow/lite:string_util",
        "//tensorflow/lite/delegates/nnapi:nnapi_delegate",
        "//tensorflow/lite/kernels:builtin_ops",
        "//tensorflow/lite/profiling:profiler",
        "//tensorflow/lite/tools/evaluation:utils",
    ] + select({
        "//tensorflow:android": [
            "//tensorflow/lite/delegates/gpu:delegate",
        ],
        "//tensorflow:android_arm64": [
            "//tensorflow/lite/delegates/gpu:delegate",
        ],
        "//conditions:default": [],
    }),
)
Trying to build it for x86_64 and arm64-v8a, I get an error: cc_toolchain_suite rule @local_config_cc//:toolchain: cc_toolchain_suite '@local_config_cc//:toolchain' does not contain a toolchain for cpu 'x86_64'.
Checking external/local_config_cc/BUILD (which produced the error) at line 47:
cc_toolchain_suite(
    name = "toolchain",
    toolchains = {
        "k8|compiler": ":cc-compiler-k8",
        "k8": ":cc-compiler-k8",
        "armeabi-v7a|compiler": ":cc-compiler-armeabi-v7a",
        "armeabi-v7a": ":cc-compiler-armeabi-v7a",
    },
)
and these are the only two cc_toolchains found. Searching the repository for "cc-compiler-" I only found "aarch64", which I assume is for 64-bit ARM, but nothing with "x86_64". There is "x64_windows", though, and I'm on Linux.
Trying to build with aarch64 like so:
bazel build -c opt --fat_apk_cpu=aarch64 --cpu=aarch64 --host_crosstool_top=@bazel_tools//tools/cpp:toolchain //tensorflow/lite/java:tensorflow-lite
results in an error:
ERROR: /.../external/local_config_cc/BUILD:47:1: in cc_toolchain_suite rule @local_config_cc//:toolchain: cc_toolchain_suite '@local_config_cc//:toolchain' does not contain a toolchain for cpu 'aarch64'
I was able to build the library for the x86_64 architecture by changing the soname in the build config and using full paths in CMakeLists.txt. This resulted in a .so shared library. Also, I was able to build the library for arm64-v8a using the TFLite Docker container, by adjusting the aarch64_makefile.inc file, but I did not change any build options and let build_aarch64_lib.sh build whatever it builds. This resulted in a .a static library.
So now I have two TFLite libs, but I'm still unable to use them (I can't #include "..." anything, for example).
When trying to build the project, using only x86_64 works fine, but trying to include the arm64-v8a library results in a ninja error: '.../libtensorflow-lite.a', needed by '.../app/build/intermediates/cmake/debug/obj/armeabi-v7a/libnative-lib.so', missing and no known rule to make it.
In my latest attempt:
- I copied all the files from TFLite's lite directory and created a similar structure in app/src/main/cpp, in which I include the (A) tensorflow, (B) absl and (C) flatbuffers files
- I changed the #include "tensorflow/... lines in all of tensorflow's header files to relative paths so the compiler can find them
- In build.gradle I added a no-compression line for the .tflite file: aaptOptions { noCompress "tflite" }
- I added an assets directory to the app
- In native-lib.cpp I added some example code from the TFLite website
- I tried to build the project with the source files included (build target is arm64-v8a)
I get an error:
/path/to/Android/Sdk/ndk/20.0.5594570/toolchains/llvm/prebuilt/linux-x86_64/sysroot/usr/include/c++/v1/memory:2339: error: undefined reference to 'tflite::impl::Interpreter::~Interpreter()'
In <memory>, line 2339 is the "delete __ptr;" line:
_LIBCPP_INLINE_VISIBILITY void operator()(_Tp* __ptr) const _NOEXCEPT {
    static_assert(sizeof(_Tp) > 0,
                  "default_delete can not delete incomplete type");
    static_assert(!is_void<_Tp>::value,
                  "default_delete can not delete incomplete type");
    delete __ptr;
}
How can I include the TFLite libraries in Android Studio, so I can run a TFL inference from the NDK?
Alternatively, how can I use Gradle (currently with CMake) to build and compile the source files?
I use Native TFL with C-API in the following way:
- Change the file type of the TFL AAR (.aar) file to .zip and unzip it to get the shared library (.so file)
- Get the header files from the c directory in the TFL repository
- Create a jni directory (New -> Folder -> JNI Folder) in app/src/main and also create architecture sub-directories in it (arm64-v8a or x86_64 for example)
- Put all header files in the jni directory (next to the architecture directories), and put the shared library inside the architecture directory/ies
- Open the CMakeLists.txt file and include an add_library stanza for the TFL library, the path to the shared library in a set_target_properties stanza, and the headers in an include_directories stanza (see below, in the NOTES section)
- In native-lib.cpp include the headers, for example:

#include "../jni/c_api.h"
#include "../jni/common.h"
#include "../jni/builtin_ops.h"
TFL functions can be called directly, for example:

TfLiteModel* model = TfLiteModelCreateFromFile(full_path);
TfLiteInterpreter* interpreter = TfLiteInterpreterCreate(model, /*optional_options=*/nullptr);
TfLiteInterpreterAllocateTensors(interpreter);

TfLiteTensor* input_tensor =
        TfLiteInterpreterGetInputTensor(interpreter, 0);
const TfLiteTensor* output_tensor =
        TfLiteInterpreterGetOutputTensor(interpreter, 0);

TfLiteStatus from_status = TfLiteTensorCopyFromBuffer(
        input_tensor,
        input_data,
        TfLiteTensorByteSize(input_tensor));

TfLiteStatus interpreter_invoke_status = TfLiteInterpreterInvoke(interpreter);

TfLiteStatus to_status = TfLiteTensorCopyToBuffer(
        output_tensor,
        output_data,
        TfLiteTensorByteSize(output_tensor));
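When the inference is done, you can check the statuses and release the handles (a short sketch continuing the example above; TfLiteInterpreterDelete and TfLiteModelDelete are the standard c_api.h destructors):

// Check the copy/invoke statuses before trusting output_data,
// then release the handles; delete the interpreter before the model.
if (from_status != kTfLiteOk ||
    interpreter_invoke_status != kTfLiteOk ||
    to_status != kTfLiteOk) {
    // handle the error
}
TfLiteInterpreterDelete(interpreter);
TfLiteModelDelete(model);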
NOTES:
My cmake environment also included cppFlags "-frtti -fexceptions".
CMakeLists.txt example:
set(JNI_DIR ${CMAKE_CURRENT_SOURCE_DIR}/../jni)
add_library(tflite-lib SHARED IMPORTED)
set_target_properties(tflite-lib
        PROPERTIES IMPORTED_LOCATION
        ${JNI_DIR}/${ANDROID_ABI}/libtfl.so)
include_directories( ${JNI_DIR} )
target_link_libraries(
        native-lib
        tflite-lib
        ...)
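For completeness, here is a hypothetical JNI wrapper tying the calls above together (the Java class and method names are made up for illustration):

#include <jni.h>
#include "../jni/c_api.h"

// Hypothetical JNI entry point; com.example.app.MainActivity.runInference is a placeholder name.
extern "C" JNIEXPORT jboolean JNICALL
Java_com_example_app_MainActivity_runInference(
        JNIEnv* env, jobject /* this */, jstring model_path) {
    const char* path = env->GetStringUTFChars(model_path, nullptr);
    TfLiteModel* model = TfLiteModelCreateFromFile(path);
    env->ReleaseStringUTFChars(model_path, path);
    if (model == nullptr) return JNI_FALSE;

    TfLiteInterpreter* interpreter = TfLiteInterpreterCreate(model, nullptr);
    bool ok = false;
    if (interpreter != nullptr) {
        ok = TfLiteInterpreterAllocateTensors(interpreter) == kTfLiteOk &&
             TfLiteInterpreterInvoke(interpreter) == kTfLiteOk;
        TfLiteInterpreterDelete(interpreter);
    }
    TfLiteModelDelete(model);
    return ok ? JNI_TRUE : JNI_FALSE;
}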
I have also struggled with building TF Lite C++ APIs for Android. Fortunately, I managed to make it work.
The problem is that we need to configure the Bazel build process before running the bazel build ... commands. The TF Lite Android Quick Start guide doesn't mention it.
Step-by-step guide (https://github.com/cuongvng/TF-Lite-Cpp-API-for-Android):
Step 1: Install Bazel
Step 2: Clone the TensorFlow repo
git clone https://github.com/tensorflow/tensorflow
cd ./tensorflow/
Step 3: Configure the build
Before running the bazel build ... command, you need to configure the build process. Do so by executing ./configure.
The configure file is at the root of the tensorflow directory, which you cd'd into in Step 2.
Now you have to input some configurations on the command line:
$ ./configure
You have bazel 3.7.2-homebrew installed.
Please specify the location of python. [Default is /Library/Developer/CommandLineTools/usr/bin/python3]: /Users/cuongvng/opt/miniconda3/envs/style-transfer-tf-lite/bin/python
First is the location of python, because ./configure executes the configure.py file.
Choose a location that has Numpy installed, otherwise the later build will fail.
Here I point it to the python executable of a conda environment.
Next,
Found possible Python library paths:
/Users/cuongvng/opt/miniconda3/envs/style-transfer-tf-lite/lib/python3.7/site-packages
Please input the desired Python library path to use. Default is [/Users/cuongvng/opt/miniconda3/envs/style-transfer-tf-lite/lib/python3.7/site-packages]
I press Enter to use the default site-packages, which contains the necessary libraries to build TF.
Next,
Do you wish to build TensorFlow with ROCm support? [y/N]: N
No ROCm support will be enabled for TensorFlow.
Do you wish to build TensorFlow with CUDA support? [y/N]: N
No CUDA support will be enabled for TensorFlow.
Do you wish to download a fresh release of clang? (Experimental) [y/N]: N
Clang will not be downloaded.
Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -Wno-sign-compare]:
Answer as shown above; on the last line, just press Enter.
Then it asks you whether to configure ./WORKSPACE for Android builds, type y to add configurations.
Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]: y
Searching for NDK and SDK installations.
Please specify the home path of the Android NDK to use. [Default is /Users/cuongvng/library/Android/Sdk/ndk-bundle]: /Users/cuongvng/Library/Android/sdk/ndk/21.1.6352462
That is the home path of the Android NDK (version 21.1.6352462) on my local machine.
Note that when you ls the path, it must include platforms, e.g.:
$ ls /Users/cuongvng/Library/Android/sdk/ndk/21.1.6352462
CHANGELOG.md build ndk-stack prebuilt source.properties wrap.sh
NOTICE meta ndk-which python-packages sources
NOTICE.toolchain ndk-build package.xml shader-tools sysroot
README.md ndk-gdb platforms simpleperf toolchains
For now I ignore the resulting WARNING, then choose the min NDK API level
WARNING: The NDK version in /Users/cuongvng/Library/Android/sdk/ndk/21.1.6352462 is 21, which is not supported by Bazel (officially supported versions: [10, 11, 12, 13, 14, 15, 16, 17, 18]). Please use another version. Compiling Android targets may result in confusing errors.
Please specify the (min) Android NDK API level to use. [Available levels: ['16', '17', '18', '19', '21', '22', '23', '24', '26', '27', '28', '29']] [Default is 21]: 29
Next,
Please specify the home path of the Android SDK to use. [Default is /Users/cuongvng/library/Android/Sdk]: /Users/cuongvng/Library/Android/sdk
Please specify the Android SDK API level to use. [Available levels: ['28', '29', '30']] [Default is 30]: 30
Please specify an Android build tools version to use. [Available versions: ['29.0.2', '29.0.3', '30.0.3', '31.0.0-rc1']] [Default is 31.0.0-rc1]: 30.0.3
That is all for the Android build configs. Choose N for all questions appearing later.

Step 4: Build the shared library (.so)
Now you can run the bazel build command to generate libraries for your target architecture:

bazel build -c opt --config=android_arm //tensorflow/lite:libtensorflowlite.so
# or
bazel build -c opt --config=android_arm64 //tensorflow/lite:libtensorflowlite.so

It should work without errors. The generated library will be saved at ./bazel-bin/tensorflow/lite/libtensorflowlite.so.
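To sanity-check the build, you can link a small C++ program against the generated library; a minimal sketch, assuming the tensorflow repo root and the flatbuffers headers are on your include path, and using a placeholder model path:

#include <memory>
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

// Linking this against libtensorflowlite.so resolves symbols such as
// tflite::impl::Interpreter::~Interpreter() from the question's linker error.
int main() {
    auto model = tflite::FlatBufferModel::BuildFromFile("/data/local/tmp/model.tflite");
    if (model == nullptr) return 1;
    tflite::ops::builtin::BuiltinOpResolver resolver;
    std::unique_ptr<tflite::Interpreter> interpreter;
    if (tflite::InterpreterBuilder(*model, resolver)(&interpreter) != kTfLiteOk) return 1;
    return interpreter->AllocateTensors() == kTfLiteOk ? 0 : 1;
}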