
How to build TensorFlow lite with select TensorFlow ops for x86_64 systems?

To run a TensorFlow Lite model that uses native TensorFlow operations (select TF ops), the libtensorflow-lite static library has to be recompiled. The instructions for doing this in C++ can be found HERE.

It states that

When building TensorFlow Lite libraries using the bazel pipeline, the additional TensorFlow ops library can be included and enabled as follows:

  • Enable monolithic builds if necessary by adding the --config=monolithic build flag.

  • Add the TensorFlow ops delegate library dependency to the build dependencies: tensorflow/lite/delegates/flex:delegate.

Note that the necessary TfLiteDelegate will be installed automatically when creating the interpreter at runtime as long as the delegate is linked into the client library. It is not necessary to explicitly install the delegate instance as is typically required with other delegate types.
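For illustration, a minimal BUILD rule for a client binary wired up this way might look like the sketch below. The target name myapp and its source file are hypothetical, and the dependency labels can shift between TensorFlow versions:

# Hypothetical BUILD file for an app linking TFLite plus the flex (select TF ops) delegate
cc_binary(
    name = "myapp",  # hypothetical target name
    srcs = ["myapp.cc"],
    deps = [
        "//tensorflow/lite:framework",  # core TFLite interpreter
        "//tensorflow/lite/kernels:builtin_ops",  # builtin op resolver
        "//tensorflow/lite/delegates/flex:delegate",  # select TF ops delegate
    ],
)

Built with bazel build --config=monolithic //your/package:myapp, the flex delegate is then picked up automatically at interpreter creation, as the note above says.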

The thing is that the standard way of building the static lib is via a shell script/Make (see the docs HERE; that page is for arm64, but there are scripts for x86_64 as well, roughly as sketched below). There is no obvious way for me to build tensorflow-lite via Bazel and modify the build targets there.
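For reference, the script-based build mentioned above looks roughly like this (script names as of TF 1.x/2.0-era source trees; they may have moved since):

# Fetch third-party dependencies first
./tensorflow/lite/tools/make/download_dependencies.sh
# Native x86_64 build
./tensorflow/lite/tools/make/build_lib.sh
# Cross-compile for aarch64 instead
./tensorflow/lite/tools/make/build_aarch64_lib.sh

Both variants drop libtensorflow-lite.a under tensorflow/lite/tools/make/gen/<target>/lib, and neither appears to expose a switch for the flex delegate.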

Has anybody successfully built this for arm64/x86_64 architectures and can share the steps? I'm new to Bazel and cannot find a detailed walkthrough.

EDIT

After following the troubleshooting steps proposed by @jdehesa, I was able to build libtensorflowlite.so, but I ran into another problem. My app builds successfully, but upon execution the .so file cannot be found:

./myapp: error while loading shared libraries: libtensorflowlite.so: cannot open shared object file: No such file or directory

The paths should be correct: other .so files located in the same directory are found without problems, and the app works when I use the static library instead.
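A minimal sketch of how this is typically diagnosed and worked around, assuming the library sits in a hypothetical /path/to/libs (an error like this usually means the directory is missing from the dynamic linker's runtime search path, even though linking succeeded):

# Show which shared libraries resolve and which are missing
ldd ./myapp | grep tensorflowlite
# Quick workaround: extend the loader's search path
export LD_LIBRARY_PATH=/path/to/libs:$LD_LIBRARY_PATH
./myapp
# Or bake the path in at link time instead
g++ myapp.o -L/path/to/libs -ltensorflowlite -Wl,-rpath,/path/to/libs -o myapp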

To reproduce the issue, I used the tensorflow/tensorflow:devel-gpu-py3 Docker development image (instructions found here).

I executed the configure script with default settings, and used the command

bazel build --config=monolithic --define=with_select_tf_ops=true -c opt //tensorflow/lite:libtensorflowlite.so

to create the library. I have uploaded my built library to my personal repo (https://github.com/DocDriven/debug-lite).

asked Oct 30 '19 by DocDriven



1 Answer

EDIT: It seems the experimental option with_select_tf_ops was removed shortly after this was posted. As far as I can tell, there is no built-in option to include the TF delegate library in the current build script for libtensorflowlite. If you want to build the library with Bazel, the only option at the moment seems to be to add tensorflow/lite/delegates/flex:delegate to the target's dependencies, as suggested in the docs (and sketched in the question above).

A few days ago a commit was submitted with initial support for building TFLite with CMake. In that build script there is an option SELECT_TF_OPS to include the delegates library in the build. I don't know if that build works at the moment, but I suppose it will become part of an upcoming official release eventually.
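A sketch of how that CMake build might be invoked, assuming the SELECT_TF_OPS option from the commit mentioned above (the option name and layout may well change before an official release):

# From a sibling directory of the tensorflow checkout
mkdir tflite_build && cd tflite_build
cmake ../tensorflow/tensorflow/lite -DSELECT_TF_OPS=ON
cmake --build . -j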


It appears that libtensorflow-lite.a is built with Makefiles, outside of Bazel, so I'm not sure whether you can actually use that option for that library. There is, however, an experimental shared library target, libtensorflowlite.so, which I think may be what you need. You can pass the experimental option with_select_tf_ops to include the TensorFlow kernels in it. So I think the build command would be something like:

bazel build --config=monolithic --define=with_select_tf_ops=true -c opt //tensorflow/lite:libtensorflowlite.so
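One rough way to sanity-check the result (not an official verification step) is to look for flex delegate symbols in the produced library:

# The flex build should be noticeably larger and export flex symbols
nm -D bazel-bin/tensorflow/lite/libtensorflowlite.so | grep -i flex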
answered Oct 17 '22 by jdehesa