 

How to fix "There is at least 1 reference to internal data in the interpreter in the form of a numpy array or slice" and run inference on tf.lite

I'm trying to run inference using tf.lite on an MNIST Keras model that I optimized with post-training quantization according to this. I get the following error:

RuntimeError: There is at least 1 reference to internal data
in the interpreter in the form of a numpy array or slice. Be sure to
only hold the function returned from tensor() if you are using raw
data access.

It happens after I resize either the images to be 4-dimensional, or the interpreter itself (as seen in the commented line), since the error before this was something like "expected 4 dimensions but found 3". Here is the code:

import tensorflow as tf
tf.enable_eager_execution()
import numpy as np
from tensorflow.keras.datasets import mnist
import matplotlib.pyplot as plt
%matplotlib inline

mnist_train, mnist_test = tf.keras.datasets.mnist.load_data()
images, labels = tf.cast(mnist_test[0], tf.float32)/255.0, mnist_test[1]
images = np.reshape(images,[images.shape[0],images.shape[1],images.shape[2],1])
mnist_ds = tf.data.Dataset.from_tensor_slices((images, labels)).batch(1, drop_remainder = True)

interpreter = tf.lite.Interpreter(model_path="C:\\Users\\USER\\Documents\\python\\converted_quant_model_cnn_5_100.tflite")
#tf.lite.Interpreter.resize_tensor_input(interpreter, input_index="index" , tensor_size=([1,28,28,1]) )

interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]

for img, label in mnist_ds.take(1):
  break
#print(img.get_shape)
interpreter.set_tensor(input_index, img)
interpreter.invoke()
predictions = interpreter.get_tensor(output_index)
asked Jun 26 '19 by theroguecode

2 Answers

Just to add what solved it for me: I am using scripts, so it is not related to Jupyter notebooks.

My problem was that I was using predictions = interpreter.tensor(output_index) instead of predictions = interpreter.get_tensor(output_index).

However, it showed up as the same error discussed in this thread.
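
Here is a minimal sketch of the difference, assuming a generic model.tflite on disk and an all-zeros dummy input (both are placeholders, not part of the original question): interpreter.tensor() returns a callable whose result is a numpy view into the interpreter's internal buffer, while get_tensor() returns a copy.

import numpy as np
import tensorflow as tf

# Placeholder model path for illustration.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
output_index = interpreter.get_output_details()[0]["index"]

# Feed a dummy all-zeros input just to have something to run.
interpreter.set_tensor(input_details["index"],
                       np.zeros(input_details["shape"], dtype=input_details["dtype"]))
interpreter.invoke()

# Safe: get_tensor() copies the output buffer, so no internal reference is kept.
predictions = interpreter.get_tensor(output_index)

# Risky: tensor() returns a callable; calling it yields a numpy view into
# internal memory. Holding that view makes the next allocate_tensors() or
# invoke() raise the "reference to internal data" RuntimeError.
view = interpreter.tensor(output_index)()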

answered by BCJuan


I was facing the same issue while running inference on a tflite model. Tracing it back, I ended up reading the functions in which this runtime error is raised.

The functions responsible for raising this error are:

def _ensure_safe(self)

and

def _safe_to_run(self)

The function "_safe_to_run()" is called from within "_ensure_safe()". _safe_to_run() returns either True or False; when it returns False, the above runtime error is raised.

It returns False when there are live numpy arrays backed by the interpreter's internal buffers, which means it is not safe to run tflite calls that may destroy (or alter) internally allocated memory.

So for "_ensure_safe()" not to raise this runtime error, we have to make sure that no numpy arrays pointing to internal buffers are active.

Also, for clarity, note that "_ensure_safe()" is called from any function that will call something on _interpreter that may reallocate memory. Thus when you call

interpreter.allocate_tensors()

as in your code above, the first thing "interpreter.allocate_tensors()" does internally is call "_ensure_safe()", because allocate_tensors() alters the internally allocated memory (here "altering" means "allocating", as the name suggests). Another place where "_ensure_safe()" is called is in "invoke()". There are many such functions, but you get the idea.
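
A simplified, illustrative sketch of this guard pattern is below. This is not the actual TensorFlow source; the buffer-reference counter is hypothetical and only meant to show how _ensure_safe()/_safe_to_run() gate the memory-altering calls.

# Illustrative sketch only -- NOT the real TensorFlow Lite implementation.
class InterpreterSketch:
    def __init__(self):
        self._live_buffer_refs = 0   # hypothetical count of numpy views handed out

    def _safe_to_run(self):
        # True only when no numpy arrays point into internal tensor memory.
        return self._live_buffer_refs == 0

    def _ensure_safe(self):
        if not self._safe_to_run():
            raise RuntimeError(
                "There is at least 1 reference to internal data in the "
                "interpreter in the form of a numpy array or slice.")

    def allocate_tensors(self):
        self._ensure_safe()          # may reallocate internal memory
        # ... allocate tensor buffers ...

    def invoke(self):
        self._ensure_safe()          # may also resize or alter tensor buffers
        # ... run inference ...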

Now that the root cause and mechanism are known, to overcome this runtime error, i.e. to have no numpy arrays pointing to internal buffers, we have to clear them.

To clear them:

a) Either shut down your Jupyter notebook and restart the kernel, as this will clear all numpy arrays/slices,

b) Or simply load the model again, i.e. run this line again in your Jupyter notebook:

interpreter = tf.lite.Interpreter(model_path="C:\\Users\\USER\\Documents\\python\\converted_quant_model_cnn_5_100.tflite")

This hopefully solves your problem; I assure you it did for me.

If neither of these options works, then note that in the above explanation I have pointed out "why" this error occurs. So if you find other ways of having no numpy arrays pointing to internal buffers, do share.
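
One such way, sketched below under the assumption that the stray reference is a numpy view you created yourself (e.g. via interpreter.tensor()) and can still reach, is to copy the data you need and delete the view before calling the interpreter again:

import numpy as np

# Assuming `interpreter` and `output_index` are defined as in the question,
# and `view` is the only remaining reference to internal interpreter memory.
view = interpreter.tensor(output_index)()   # numpy view into the internal buffer

safe_copy = np.copy(view)   # keep the data without keeping the internal reference
del view                    # drop the reference to internal memory

interpreter.allocate_tensors()   # should no longer raise the RuntimeError
interpreter.invoke()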

Reference: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/python/interpreter.py

answered by Sushanth