How to use Tensorflow inference models to generate deepdream like images

I am using a custom image set to train a neural network with the TensorFlow API. After a successful training run I get checkpoint files containing the values of the different training variables. I now want to build an inference model from these checkpoint files. I found this script which does that, and I can then use the resulting model to generate deepdream images as explained in this tutorial. The problem is that when I load my model using:

import numpy as np
import tensorflow as tf

model_fn = 'export'

graph = tf.Graph()
sess = tf.InteractiveSession(graph=graph)
with tf.gfile.FastGFile(model_fn, 'rb') as f:
  graph_def = tf.GraphDef()
  graph_def.ParseFromString(f.read())
t_input = tf.placeholder(np.float32, name='input')
imagenet_mean = 117.0
t_preprocessed = tf.expand_dims(t_input - imagenet_mean, 0)
tf.import_graph_def(graph_def, {'input': t_preprocessed})

I get this error:

graph_def.ParseFromString(f.read())
self.MergeFromString(serialized)
raise message_mod.DecodeError('Unexpected end-group tag.')
google.protobuf.message.DecodeError: Unexpected end-group tag.
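This particular DecodeError is what protobuf raises when the bytes handed to ParseFromString are not a serialized GraphDef at all, for example when 'export' is actually a directory written by an export script, or a text-format .pbtxt file. A small stdlib-only triage sketch (a hypothetical helper, not part of the original post) can narrow that down before involving TensorFlow:

```python
import os

def describe_model_path(path):
    """Rough triage for a path that is supposed to be a binary GraphDef .pb.

    This does not prove the file is a valid GraphDef (only
    tf.GraphDef().ParseFromString can do that); it just catches the
    common mistakes that produce 'Unexpected end-group tag.'
    """
    if os.path.isdir(path):
        # e.g. a SavedModel export directory, not a single .pb file
        return "directory"
    with open(path, "rb") as f:
        head = f.read(64)
    if not head:
        return "empty file"
    try:
        head.decode("ascii")
        # A long run of pure ASCII suggests a text-format .pbtxt,
        # not a binary protobuf.
        return "looks like text, not a binary protobuf"
    except UnicodeDecodeError:
        return "plausibly a binary protobuf"
```

For instance, if the export script wrote a directory rather than a single .pb file, `describe_model_path('export')` would report "directory", which would explain the failure above.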

The script expects a protocol buffer file, and I am not sure whether the script I am using to generate the inference model actually produces protocol buffer files.

Can someone please suggest what I am doing wrong, or whether there is a better way to achieve this? I simply want to convert the checkpoint files generated by TensorFlow into a protocol buffer.

Thanks

Asked Aug 01 '16 by Umer



1 Answer

The link to the script you ran is broken, but in any case the recommended approach is not to generate an inference model from a checkpoint, but rather to embed code at the end of your training program that emits a "SavedModel" export (which is not the same thing as a checkpoint).

Please see [1], in particular the heading "Building a Saved Model". Note that a SavedModel consists of multiple files, one of which is indeed a protocol buffer (which I hope directly answers your question); the others are variable files and (optional) asset files.

[1] https://www.tensorflow.org/programmers_guide/saved_model

Answered Oct 09 '22 by Christopher Olston