I converted my model to TensorFlow Lite, but when I run it I get the following error:
Here is my code:
import numpy as np
import tensorflow as tf

interpreter = tf.contrib.lite.Interpreter(model_path="/mnt/ficusspain/cqli/tensorflow_models/Quantized_Models/mobilenet_v1_0.25_128_quant/mobilenet_v1_0.25_128_quant.tflite")
interpreter.allocate_tensors()
print("can we get here?")
# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
print("can we get here")
# Test model on random input data.
input_shape = input_details[0]['shape']
print(input_shape)
print(input_details[0]['index'])
print(output_details[0]['index'])
input_data = np.array(np.random.random_sample(input_shape), dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
output_data = interpreter.get_tensor(output_details[0]['index'])
print(output_data)
You need to change the dtype from np.float32 to np.uint8, because this quantized model expects 8-bit integer input:
input_data = np.array(np.random.random_sample(input_shape), dtype=np.uint8)
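Side note: if you later want to feed real data (for example a normalized float image) instead of random values, you have to quantize it yourself using the scale and zero point stored in the input details. A minimal sketch, where image is a hypothetical float array matching the input shape:

# scale and zero_point describe the mapping real_value = scale * (q - zero_point)
scale, zero_point = input_details[0]['quantization']
# Quantize the float data and clamp it to the valid uint8 range.
input_data = np.clip(np.round(image / scale + zero_point), 0, 255).astype(np.uint8)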
You can always check which dtype is required with:
print(interpreter.get_input_details())
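If you want the test snippet to work regardless of whether the model is float or quantized, you can read the expected dtype from the input details instead of hard-coding it. A minimal sketch under those assumptions; the model path is a placeholder, and on newer TensorFlow versions the interpreter is exposed as tf.lite.Interpreter rather than tf.contrib.lite.Interpreter:

import numpy as np
import tensorflow as tf

interpreter = tf.contrib.lite.Interpreter(model_path="model.tflite")  # placeholder path
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Read the dtype the model actually expects instead of assuming float32.
expected_dtype = input_details[0]['dtype']   # np.uint8 for this quantized model
input_shape = input_details[0]['shape']

if np.issubdtype(expected_dtype, np.integer):
    # Quantized model: draw random integers over the full dtype range.
    info = np.iinfo(expected_dtype)
    input_data = np.random.randint(info.min, info.max + 1,
                                   size=tuple(input_shape), dtype=expected_dtype)
else:
    # Float model: random values in [0, 1).
    input_data = np.random.random_sample(input_shape).astype(expected_dtype)

interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]['index']))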