 

Can Google's AutoML export trained models for offline inference?

AutoML seems great. One big question: can we export the trained model for offline inference, for example with TensorFlow or TensorFlow Lite?

asked Aug 15 '18 by user4572254




2 Answers

This is not supported as of March 2019. If you are interested in this feature, star this request: https://issuetracker.google.com/issues/113122585

Also check that link to see whether Google has implemented the feature since this answer was written.

Update: initial support has been added for classification, but not yet detection. See Peter Gibson's answer.

answered Sep 30 '22 by N8allan


EDIT: It's now possible to export both Image Classification and Object Detection models. See https://cloud.google.com/vertex-ai/docs/export/export-edge-model#object-detection

Original Answer Follows

Current status (August 2019) for AutoML Vision: you can export AutoML image classification models, but not object detection models. The feature is in beta (as is AutoML Vision itself). I couldn't find details for the other AutoML products and haven't tried them myself, so I'm unsure of their status.

From https://cloud.google.com/vision/automl/docs/

AutoML Vision Edge now allows you to export your custom trained models.

  • AutoML Vision Edge allows you to train and deploy low-latency, high accuracy models optimized for edge devices.
  • With TensorFlow Lite, Core ML, and container export formats, AutoML Vision Edge supports a variety of devices.
  • Hardware architectures supported: Edge TPUs, ARM and NVIDIA.
  • To build an application on iOS or Android devices you can use AutoML Vision Edge in ML Kit. This solution is available via Firebase and offers an end-to-end development flow for creating and deploying custom models to mobile devices using ML Kit client libraries.

Documentation https://cloud.google.com/vision/automl/docs/edge-quickstart
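The docs also mention a container export format. As far as I can tell, the exported container follows the TensorFlow Serving REST convention; here's a hedged sketch of querying such a container from Python, assuming it was started locally with its REST port published (e.g. docker run --rm -p 8501:8501 <exported-image>). The payload field names (image_bytes, key) and the model name (default) are assumptions, not something I've verified:

import base64

import requests

# Read a test image and base64-encode it for the JSON payload.
with open("test.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

# Payload shape follows the TensorFlow Serving REST predict API;
# the "image_bytes"/"key" field names are assumptions.
payload = {"instances": [{"image_bytes": {"b64": image_b64}, "key": "0"}]}
response = requests.post(
    "http://localhost:8501/v1/models/default:predict",  # "default" is assumed
    json=payload,
)
print(response.json())  # per-label scores for the test image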

I trained a classification model, exported the tflite model (the export lands in Cloud Storage), and was able to download the model files and load them into TensorFlow using the Python API without too much hassle.
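Fetching the exported files from Cloud Storage is straightforward with the google-cloud-storage client. A minimal sketch (the bucket and object names here are hypothetical; the console shows the real export path):

from google.cloud import storage

# Uses application-default credentials for the project.
client = storage.Client()
bucket = client.bucket("my-automl-exports")          # hypothetical bucket
blob = bucket.blob("model-export/icn/model.tflite")  # hypothetical object path
blob.download_to_filename("model.tflite")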

Here's the relevant code for loading the model and running inference, based on https://www.tensorflow.org/lite/guide/inference#load_and_run_a_model_in_python:

import tensorflow as tf

# MODEL_PATH points at the downloaded .tflite file, e.g. "model.tflite".
# Load the TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path=MODEL_PATH)
interpreter.allocate_tensors()

# Get input and output tensor details.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

def predict(frame):
    # `frame` must match the model's input shape and dtype.
    interpreter.set_tensor(input_details[0]['index'], frame)
    interpreter.invoke()

    # The function `get_tensor()` returns a copy of the tensor data.
    # Use `tensor()` in order to get a pointer to the tensor.
    output_data = interpreter.get_tensor(output_details[0]['index'])
    return output_data
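For completeness, a hedged usage sketch for predict(): the expected input shape and dtype are read from the interpreter itself rather than hard-coded, and "test.jpg" is a placeholder file name:

import numpy as np
from PIL import Image

# The model's input geometry and dtype, straight from the interpreter.
_, height, width, _ = input_details[0]['shape']
dtype = input_details[0]['dtype']

# Resize the image to the model's input size and add a batch dimension.
img = Image.open("test.jpg").convert("RGB").resize((width, height))
frame = np.expand_dims(np.array(img, dtype=dtype), axis=0)

scores = predict(frame)
print(scores)  # one score per label, in the model's label order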
answered Sep 30 '22 by Peter Gibson