
How to convert a JPEG image into a JSON file for Google Cloud ML prediction

I'm working on Google Cloud ML, and I want to get a prediction for a JPEG image. To do this, I would like to use:

gcloud beta ml predict --instances=INSTANCES --model=MODEL [--version=VERSION]

(https://cloud.google.com/ml/reference/commandline/predict)

INSTANCES is the path to a JSON file with all the info about the image. How can I create this JSON file from my JPEG image?

Many thanks!!

Asked Jan 05 '23 by Davide Biraghi

1 Answer

The first step is to make sure that the graph you export has a placeholder and ops that can accept JPEG data. Note that CloudML assumes you are sending a batch of images, so we use tf.map_fn to decode and resize each image in the batch. Depending on the model, extra preprocessing may be required to normalize the data, etc. This is shown below:

# Number of channels in the input image
CHANNELS = 3

# Dimensions of resized images (input to the neural net)
HEIGHT = 200
WIDTH = 200

# A placeholder for a batch of images
images_placeholder = tf.placeholder(dtype=tf.string, shape=(None,))

# The CloudML Prediction API always "feeds" the Tensorflow graph with
# dynamic batch sizes e.g. (?,).  decode_jpeg only processes scalar
# strings because it cannot guarantee a batch of images would have
# the same output size.  We use tf.map_fn to give decode_jpeg a scalar
# string from dynamic batches.
def decode_and_resize(image_str_tensor):
  """Decodes jpeg string, resizes it and returns a uint8 tensor."""

  image = tf.image.decode_jpeg(image_str_tensor, channels=CHANNELS)

  # Note resize expects a batch_size, but tf.map_fn suppresses that index,
  # thus we have to expand then squeeze.  Resize returns float32 in the
  # range [0, uint8_max].
  image = tf.expand_dims(image, 0)
  image = tf.image.resize_bilinear(
      image, [HEIGHT, WIDTH], align_corners=False)
  image = tf.squeeze(image, squeeze_dims=[0])
  image = tf.cast(image, dtype=tf.uint8)
  return image

decoded_images = tf.map_fn(
    decode_and_resize, images_placeholder, back_prop=False, dtype=tf.uint8)

# convert_image_dtype also scales [0, uint8_max] -> [0, 1).
images = tf.image.convert_image_dtype(decoded_images, dtype=tf.float32)

# Then shift images to [-1, 1) (useful for some models such as Inception)
images = tf.sub(images, 0.5)  # tf.subtract in TensorFlow >= 1.0
images = tf.mul(images, 2.0)  # tf.multiply in TensorFlow >= 1.0

# ...
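
The [0, 1) to [-1, 1) shift above is plain per-pixel arithmetic. As a sanity check, here is the same normalization sketched in pure Python, independent of TensorFlow (the helper name is hypothetical, not part of the exported graph):

```python
# Hypothetical helper illustrating the normalization applied above:
# a uint8 pixel value is scaled to [0, 1], then shifted to [-1, 1].
def normalize(pixel):
    x = pixel / 255.0        # [0, 255] -> [0, 1]
    return (x - 0.5) * 2.0   # [0, 1]   -> [-1, 1]
```

So a black pixel (0) maps to -1.0 and a white pixel (255) maps to 1.0, which is the input range models like Inception expect.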

Also, we need to be sure to properly mark the inputs. In this case, it's essential that the name of the input (the key in the map) ends in _bytes; when sending base64-encoded data, this lets the CloudML prediction service know it needs to decode the data:

inputs = {"image_bytes": images_placeholder.name}
tf.add_to_collection("inputs", json.dumps(inputs))

The data format the gcloud command expects is of the form:

{"image_bytes": {"b64": "dGVzdAo="}}

(Note, if image_bytes is the only input to your model you can simplify to just {"b64": "dGVzdAo="}).

For example, to create this from a file on disk, you could try something like:

echo "{\"image_bytes\": {\"b64\": \"`base64 -w 0 image.jpg`\"}}" > instances

(Note: GNU base64 wraps its output at 76 characters by default, which would put newlines inside the JSON string; -w 0 disables the wrapping.)
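
If shell quoting or base64 line wrapping gets in the way, the same instance file can be written from Python with just the standard library. This is a sketch; the function name and the "instances" output path are assumptions, and the "image_bytes" key must match the input name exported with the graph:

```python
import base64
import json

# Hypothetical helper: base64-encode a JPEG file and write the
# single-instance JSON that `gcloud beta ml predict` expects.
def make_instance(jpeg_path, out_path="instances"):
    with open(jpeg_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("utf-8")
    # The key must end in _bytes and match the exported input name.
    instance = {"image_bytes": {"b64": b64}}
    with open(out_path, "w") as f:
        json.dump(instance, f)
    return instance
```

For example, a file containing the bytes "test\n" produces {"image_bytes": {"b64": "dGVzdAo="}}, matching the format shown above.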

And then send it to the service like so:

gcloud beta ml predict --instances=instances --model=my_model

Please note that when sending data directly to the service, the body of the request you send needs to be wrapped in an "instances" list. So the gcloud command above actually sends the following to the service in the body of the HTTP request:

{"instances" : [{"image_bytes": {"b64": "dGVzdAo="}}]}
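
For calling the service directly over HTTP (rather than through gcloud), the wrapped body above can be built like this. This is a minimal sketch using only the standard library; it constructs the request body but does not send it, and the "image_bytes" key is assumed to match the model's exported input name:

```python
import base64
import json

# Hypothetical helper: build the HTTP request body that gcloud sends,
# i.e. the per-image instance wrapped in an "instances" list.
def make_request_body(jpeg_bytes):
    b64 = base64.b64encode(jpeg_bytes).decode("utf-8")
    return json.dumps({"instances": [{"image_bytes": {"b64": b64}}]})
```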

Answered Jan 13 '23 by rhaertel80