I have a TensorFlow model that takes input images of varying size:
inputs = layers.Input(shape=(128, None, 1), name='x_input')
<tf.Tensor 'x_input:0' shape=(?, 128, ?, 1) dtype=float32>
When I convert this model to TensorFlow Lite it complains:
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file, input_arrays, output_arrays)
tflite_model = converter.convert()
ValueError: None is only supported in the 1st dimension.
Tensor 'x_input_1' has invalid shape '[None, 128, None, 1]'.
I cannot scale my images to a fixed size. The only solution I see is to pad the images to some maximum size and use that shape in the graph, but that seems pretty wasteful. Is there any other way to make TensorFlow Lite work with dynamic image dimensions? And is there any rationale for this limitation? Thanks.
Yes, you can use dynamic tensors in TF-Lite. You can't directly set the shape to [None, 128, None, 1] because the TF-Lite runtime plans its memory with a static allocation scheme over fixed tensor shapes, which also keeps the runtime simple enough to support more language bindings in the future. That is a smart design choice for a framework intended to run on small devices with low computation power.
Here are the steps to set the tensor's size dynamically:
It seems like you're converting from a frozen GraphDef, i.e. a *.pb file. Suppose your frozen model has input shape [None, 128, None, 1].
During conversion, set the input size to any valid shape that your model accepts. For example:
# Set --input_shapes to an arbitrary *valid* shape; it is only a
# placeholder and will be resized at inference time.
tflite_convert \
  --graph_def_file='model.pb' \
  --output_file='model.tflite' \
  --input_shapes=1,128,80,1 \
  --input_arrays='input' \
  --output_arrays='Softmax'
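If you prefer the Python converter API the question already uses, the same placeholder shape can be passed via the input_shapes argument. A sketch, assuming TF 1.x and the file and array names from the command above:

import tensorflow as tf

# input_shapes replaces the two None dimensions with a fixed
# placeholder shape so that conversion succeeds.
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    'model.pb',
    input_arrays=['input'],
    output_arrays=['Softmax'],
    input_shapes={'input': [1, 128, 80, 1]})
tflite_model = converter.convert()
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)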
The trick is to use the Interpreter.resize_tensor_input(...) method of the TF-Lite API at runtime, before inference. I will provide a Python implementation; the Java and C++ implementations should be analogous, as they have similar APIs:
from tensorflow.contrib.lite.python.interpreter import Interpreter

# Load the *.tflite model and get input details
model = Interpreter(model_path='model.tflite')
input_details = model.get_input_details()

# Your network currently has an input shape (1, 128, 80, 1),
# but suppose you need the input size to be (2, 128, 200, 1).
model.resize_tensor_input(
    input_details[0]['index'], (2, 128, 200, 1))
model.allocate_tensors()
That's it. You can now use the model for images with shape (2, 128, 200, 1), as long as your network architecture allows such an input shape. Beware that you have to call model.allocate_tensors() every time you resize, which is expensive, so avoid doing it more often than your program really needs.
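To keep those reallocations rare, you can cache the last input shape and only resize when it actually changes. A minimal sketch, assuming the same model and TF 1.x Interpreter as above (run is a hypothetical helper name):

import numpy as np
from tensorflow.contrib.lite.python.interpreter import Interpreter

model = Interpreter(model_path='model.tflite')
input_details = model.get_input_details()
output_details = model.get_output_details()
model.allocate_tensors()
last_shape = tuple(input_details[0]['shape'])

def run(image):
    """Run inference on a float32 batch, resizing only when needed."""
    global last_shape
    if image.shape != last_shape:
        # Resize + reallocate only when the shape actually changed;
        # allocate_tensors() is the expensive part.
        model.resize_tensor_input(input_details[0]['index'], image.shape)
        model.allocate_tensors()
        last_shape = image.shape
    model.set_tensor(input_details[0]['index'], image)
    model.invoke()
    return model.get_tensor(output_details[0]['index'])

# Example: a batch of two 128x200 single-channel images.
out = run(np.zeros((2, 128, 200, 1), dtype=np.float32))

This keeps the common case of repeated shapes cheap while still allowing arbitrary sizes.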
The above answer no longer works with newer versions of TensorFlow. Instead of a dummy shape, use None in the conversion step; at runtime you can then resize the input with interpreter.resizeInput() in Java, or resize_tensor_input() in Python. See here: https://github.com/tensorflow/tensorflow/issues/41807
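For completeness, a minimal TF 2.x sketch of that newer flow, assuming TF 2.3 or later per the linked issue (the Conv2D layer is just a stand-in for the real network):

import numpy as np
import tensorflow as tf

# A stand-in model with the question's dynamic input shape.
inputs = tf.keras.layers.Input(shape=(128, None, 1), name='x_input')
outputs = tf.keras.layers.Conv2D(4, 3, padding='same')(inputs)
model = tf.keras.Model(inputs, outputs)

# The converter now accepts the None dimensions as-is;
# no dummy shape is needed.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# At runtime, resize the input to the batch at hand, then allocate and run.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
input_index = interpreter.get_input_details()[0]['index']
interpreter.resize_tensor_input(input_index, (2, 128, 200, 1))
interpreter.allocate_tensors()
interpreter.set_tensor(input_index, np.zeros((2, 128, 200, 1), np.float32))
interpreter.invoke()
output = interpreter.get_tensor(interpreter.get_output_details()[0]['index'])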