 

gcloud ml-engine returns error on large files

I have a trained model that takes in a somewhat large input, generally a numpy array of shape (1, 473, 473, 3). Serialized to JSON, that comes out to about a 9.2MB file. Even if I base64-encode the array for the JSON file, the input is still rather large.
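
For context, here is roughly how I build the request; the instance key 'input' here is just a placeholder:

import base64
import json

import numpy as np

# A (1, 473, 473, 3) float32 array is ~2.7MB of raw bytes.
arr = np.random.rand(1, 473, 473, 3).astype(np.float32)

# As plain JSON every float becomes a decimal string, hence ~9.2MB.
plain = json.dumps({'instances': [{'input': arr.tolist()}]})

# Base64 of the raw bytes is smaller (~3.6MB) but still over the limit.
encoded = base64.b64encode(arr.tobytes()).decode('utf-8')
b64 = json.dumps({'instances': [{'input': {'b64': encoded}}]})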

ml-engine predict rejects my request when I send the JSON file, returning the following error:

(gcloud.ml-engine.predict) HTTP request failed. Response: {
  "error": {
    "code": 400,
    "message": "Request payload size exceeds the limit: 1572864 bytes.",
    "status": "INVALID_ARGUMENT"
  }
}

It looks like I can't send anything over about 1.5MB to ml-engine. Is this a hard limit? How do others handle online predictions for large inputs? Do I have to spin up a Compute Engine instance, or would I run into the same issue there?

Edit:

I am starting from a Keras model and trying to export it for TensorFlow Serving. I load my Keras model into a variable named model and have a defined export directory export_path. I build the TensorFlow Serving model like this:

signature = predict_signature_def(inputs={'input': model.input},
                                  outputs={'output': model.output})
builder = saved_model_builder.SavedModelBuilder(export_path)
builder.add_meta_graph_and_variables(
    sess=sess,
    tags=[tag_constants.SERVING],
    signature_def_map={
        signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature
    }
)
builder.save()

How would the input look for this signature_def? Would the JSON just be something like {'input': 'https://storage.googleapis.com/projectid/bucket/filename'} where the file is the (1,473,473,3) numpy array?

2nd Edit: Looking at the code posted by Lak Lakshmanan, I have tried a few different variations on reading an image URL and parsing the file that way, but without success. I tried the following:

inputs = {'imageurl': tf.placeholder(tf.string, shape=[None])}
filename = tf.squeeze(inputs['imageurl'])
image = read_and_preprocess(filename)  # custom preprocessing function
image = tf.placeholder_with_default(image, shape=[None, HEIGHT, WIDTH, NUM_CHANNELS])
features = {'image': image}
inputs.update(features)
signature = predict_signature_def(inputs=inputs,
                                  outputs={'output': model.output})


with K.get_session() as session:
    """Convert the Keras HDF5 model into TensorFlow SavedModel."""
    builder = saved_model_builder.SavedModelBuilder(export_path)
    builder.add_meta_graph_and_variables(
        sess=session,
        tags=[tag_constants.SERVING],
        signature_def_map={
            signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature
        }
    )
    builder.save()

I believe the problem is in getting a mapping from the imageurl placeholder through to building the features. Any thoughts on what I am doing wrong?

asked Nov 27 '17 by Chase Midler

2 Answers

What I typically do is have the JSON refer to a file in Google Cloud Storage. Users first upload their file to GCS and then invoke prediction. This approach has other advantages as well, since the storage utilities allow for parallel and multithreaded uploads.
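
As a sketch of that client-side flow (bucket and file names are placeholders), using the google-cloud-storage library:

import json
from google.cloud import storage

# Upload the large input to GCS first.
client = storage.Client()
bucket = client.bucket('my-bucket')
blob = bucket.blob('inputs/image.jpg')
blob.upload_from_filename('image.jpg')

# The prediction request then carries only the GCS path, not the bytes.
with open('request.json', 'w') as f:
    json.dump({'imageurl': 'gs://my-bucket/inputs/image.jpg'}, f)

# Then: gcloud ml-engine predict --model <model> --json-instances request.json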

Keras/TensorFlow 2.0

In TensorFlow 2.0, the serving function looks like this:

@tf.function(input_signature=[tf.TensorSpec([None,], dtype=tf.string)])
def predict_bytes(img_bytes):
    input_images = tf.map_fn(
        preprocess,
        img_bytes,
        fn_output_signature=tf.float32
    )
    batch_pred = model(input_images) # same as model.predict()
    top_prob = tf.math.reduce_max(batch_pred, axis=[1])
    pred_label_index = tf.math.argmax(batch_pred, axis=1)
    pred_label = tf.gather(tf.convert_to_tensor(CLASS_NAMES), pred_label_index)
    return {
        'probability': top_prob,
        'flower_type_int': pred_label_index,
        'flower_type_str': pred_label
    }

@tf.function(input_signature=[tf.TensorSpec([None,], dtype=tf.string)])
def predict_filename(filenames):
    img_bytes = tf.map_fn(
        tf.io.read_file,
        filenames
    )
    result = predict_bytes(img_bytes)
    result['filename'] = filenames
    return result

import os
import shutil

shutil.rmtree('export', ignore_errors=True)
os.mkdir('export')
model.save('export/flowers_model3',
          signatures={
              'serving_default': predict_filename,
              'from_bytes': predict_bytes
          })

full code is here: https://nbviewer.jupyter.org/github/GoogleCloudPlatform/practical-ml-vision-book/blob/master/09_deploying/09d_bytes.ipynb
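
To sanity-check the exported signatures locally, something like this should work (a sketch; the image paths are placeholders):

import tensorflow as tf

# Load the SavedModel exported above and exercise both signatures.
loaded = tf.saved_model.load('export/flowers_model3')

# serving_default takes filenames (local or GCS paths) ...
serve_fn = loaded.signatures['serving_default']
pred = serve_fn(filenames=tf.constant(['gs://my-bucket/flower.jpg']))

# ... while from_bytes takes the raw image bytes directly.
bytes_fn = loaded.signatures['from_bytes']
img = tf.io.read_file('local_flower.jpg')
pred = bytes_fn(img_bytes=tf.expand_dims(img, axis=0))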

TensorFlow 1.0

In TensorFlow 1.0, the code looks like this:

def serving_input_fn():
    # Note: only handles one image at a time ...
    inputs = {'imageurl': tf.placeholder(tf.string, shape=())}
    filename = tf.squeeze(inputs['imageurl'])  # make it a scalar
    image = read_and_preprocess(filename)
    # make the outer dimension unknown (and not 1)
    image = tf.placeholder_with_default(image, shape=[None, HEIGHT, WIDTH, NUM_CHANNELS])

    features = {'image': image}
    return tf.estimator.export.ServingInputReceiver(features, inputs)

full code here: https://github.com/GoogleCloudPlatform/training-data-analyst/blob/61ab2e175a629a968024a5d09e9f4666126f4894/courses/machine_learning/deepdive/08_image/flowersmodel/trainer/model.py#L119
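
To tie it together, a minimal sketch of the export and the resulting request (estimator is assumed to be the trained Estimator from the linked code; paths are placeholders):

# Export with this serving input function (TF 1.x Estimator API).
export_dir = estimator.export_savedmodel('gs://my-bucket/export',
                                         serving_input_fn)

# A prediction instance then carries only a GCS path, far below the
# 1.5MB payload limit:
#   {"imageurl": "gs://my-bucket/flower.jpg"}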

answered Sep 24 '22 by Lak


I encountered the same error when trying to run predictions on AI Platform with large images. I solved the payload limit problem by encoding the images to PNG before sending them to AI Platform.

My Keras model doesn't take PNG-encoded images as input, so I needed to convert the Keras model to a TensorFlow Estimator and define a serving input function containing the code to decode the PNG-encoded images back to the format my model expects.

Example code for when the model expects two different grayscale images as input:

import tensorflow as tf
from tensorflow.keras.estimator import model_to_estimator
from tensorflow.estimator.export import ServingInputReceiver

IMG_PNG_1 = "encoded_png_image_1"
IMG_PNG_2 = "encoded_png_image_2"


def create_serving_fn(image_height, image_width):
    def serving_input_fn():
        def preprocess_png(png_encoded_img):
            img = tf.reshape(png_encoded_img, shape=())
            img = tf.io.decode_png(img, channels=1)
            img = img / 255
            img = tf.expand_dims(img, axis=0)
            return img

        # receiver_tensors worked only when the shape parameter wasn't defined
        receiver_tensors = {
            IMG_PNG_1: tf.compat.v1.placeholder(tf.string),
            IMG_PNG_2: tf.compat.v1.placeholder(tf.string)
        }

        img_1 = preprocess_png(png_encoded_img=receiver_tensors[IMG_PNG_1])
        img_2 = preprocess_png(png_encoded_img=receiver_tensors[IMG_PNG_2])

        input_img_1 = tf.compat.v1.placeholder_with_default(img_1, shape=[None, image_height, image_width, 1])
        input_img_2 = tf.compat.v1.placeholder_with_default(img_2, shape=[None, image_height, image_width, 1])

        features = {
            "model_input_1": input_img_1,
            "model_input_2": input_img_2,
        }

        return ServingInputReceiver(features=features, receiver_tensors=receiver_tensors)

    return serving_input_fn

# Convert trained Keras model to Estimator
estimator = model_to_estimator(keras_model=model)
save_path = "location_of_the_SavedModel"
export_path = estimator.export_saved_model(
    export_dir_base=save_path,
    serving_input_receiver_fn=create_serving_fn(1000, 1000)
)
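
On the client side, a request for this model could then be built roughly as follows. This is a sketch that assumes AI Platform's documented {"b64": ...} convention for binary inputs; the images are placeholders:

import base64
import io
import json

import numpy as np
from PIL import Image

def encode_png(arr):
    # PNG-encode a 2D uint8 grayscale array and wrap it for AI Platform.
    buf = io.BytesIO()
    Image.fromarray(arr, mode='L').save(buf, format='PNG')
    return {'b64': base64.b64encode(buf.getvalue()).decode('utf-8')}

img_1 = np.zeros((1000, 1000), dtype=np.uint8)  # placeholder images
img_2 = np.zeros((1000, 1000), dtype=np.uint8)

instance = {
    'encoded_png_image_1': encode_png(img_1),
    'encoded_png_image_2': encode_png(img_2),
}
with open('request.json', 'w') as f:
    f.write(json.dumps(instance) + '\n')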
answered Sep 25 '22 by janiemi