 

How to set input_array and output_array names in making TensorFlow Lite model

OS Platform and Distribution: Linux Ubuntu 14.04. TensorFlow version: 1.4.0, installed from binary. CUDA/cuDNN version: CUDA 8.0.

I have trained a customized model with TensorFlow and I am trying to convert it to a TensorFlow Lite model for mobile apps. My model is defined like this:

import tensorflow as tf
import tensorflow.contrib.slim as slim

def P_Net(inputs, label=None, bbox_target=None, landmark_target=None, training=True):
    #define common param
    with slim.arg_scope([slim.conv2d],
                        activation_fn=prelu,
                        weights_initializer=slim.xavier_initializer(),
                        biases_initializer=tf.zeros_initializer(),
                        weights_regularizer=slim.l2_regularizer(0.0005), 
                        padding='valid'):
        print inputs.get_shape()
        net = slim.conv2d(inputs, 28, 3, stride=1,scope='conv1')
......
        conv4_1 = slim.conv2d(net,num_outputs=2,kernel_size=[1,1],stride=1,scope='conv4_1',activation_fn=tf.nn.softmax)
        #conv4_1 = slim.conv2d(net,num_outputs=1,kernel_size=[1,1],stride=1,scope='conv4_1',activation_fn=tf.nn.sigmoid)
        
        print conv4_1.get_shape()
        #batch*H*W*4
        bbox_pred = slim.conv2d(net,num_outputs=4,kernel_size=[1,1],stride=1,scope='conv4_2',activation_fn=None)
        print bbox_pred.get_shape()

where conv4_1 and conv4_2 are the output layers.

I freeze the model with:

freeze_graph.freeze_graph('out_put_model/model.pb', '', False, model_path, 'Squeeze,Squeeze_1', '', '', 'out_put_model/frozen_model.pb', '', '')
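Read with keyword arguments, the positional call above maps roughly like this (a sketch, assuming the TF 1.x freeze_graph signature; model_path is the checkpoint prefix holding the trained weights):

from tensorflow.python.tools import freeze_graph

freeze_graph.freeze_graph(
    input_graph='out_put_model/model.pb',          # GraphDef written at training time
    input_saver='',
    input_binary=False,                            # model.pb is a text-format GraphDef
    input_checkpoint=model_path,                   # checkpoint prefix with the weights
    output_node_names='Squeeze,Squeeze_1',         # comma-separated output node names
    restore_op_name='',
    filename_tensor_name='',
    output_graph='out_put_model/frozen_model.pb',  # frozen graph to write
    clear_devices='',                              # mirrors the '' passed positionally above
    initializer_nodes='')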

After that, I can use TensorBoard to view the graph. When I read the frozen graph back to double-check it, it contains the same information as the checkpoint model.
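The read-back check is roughly the following (a minimal sketch of loading the frozen GraphDef and listing its node names, using the output path from the freeze step above):

import tensorflow as tf

graph_def = tf.GraphDef()
with tf.gfile.GFile('out_put_model/frozen_model.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

# list every node so the actual input/output array names are visible
for node in graph_def.node:
    print(node.name)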

Then I try to convert frozen_model.pb to a TensorFlow Lite model. Since TensorFlow 1.4.0 doesn't include the TensorFlow Lite module, I checked out TensorFlow from GitHub and ran toco with bazel like this:

bazel run --config=opt \
  //tensorflow/contrib/lite/toco:toco -- \
  --input_file='/home/sens/mtcnn_cat/MTCNN-Tensorflow/test/out_put_model/frozen_model.pb' \
  --output_file='/home/sens/mtcnn_cat/MTCNN-Tensorflow/test/out_put_model/pnet.tflite' \
  --inference_type=FLOAT \
  --input_shape=1,128,128,3 \
  --input_array=image_height,image_width,input_image \
  --output_array=Squeeze,Squeeze_1 \
  --input_format=TENSORFLOW_GRAPHDEF \
  --output_format=TFLITE \
  --dump_graphviz=/tmp

However, I get an error saying the output array is not found:

INFO: Running command line: bazel-bin/tensorflow/contrib/lite/toco/toco '--input_file=/home/sens/mtcnn_cat/MTCNN-Tensorflow/test/out_put_model/frozen_model.pb' '--output_file=/home/sens/mtcnn_cat/MTCNN-Tensorflow/test/out_put_model/pnet.tflite' '--inference_type=FLOAT' '--input_shape=1,128,128,3' '--input_array=image_height,image_width,input_image' '--output_array=Squeeze,Squeeze_1' '--input_format=TENSORFLOW_GRAPHDEF' '--output_format=TFLITE' '--dump_graphviz=/tmp'
2018-04-03 11:17:37.412589: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1172] Converting unsupported operation: Abs
2018-04-03 11:17:37.412660: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1172] Converting unsupported operation: Abs
2018-04-03 11:17:37.412699: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1172] Converting unsupported operation: Abs
2018-04-03 11:17:37.412880: F tensorflow/contrib/lite/toco/tooling_util.cc:686] Check failed: model.HasArray(output_array) Output array not found: Squeeze,Squeeze_1

Questions:

  1. How do I set the --output_array=Squeeze,Squeeze_1 parameter? I think it should match the output node names passed to freeze_graph(), and I do find the "Squeeze" and "Squeeze_1" nodes in TensorBoard.

  2. How do I set the --input_shape=1,128,128,3 --input_array=image_height,image_width,input_image parameters? I checked and found that mobile models do take a fixed-size image input, but my model has no fixed input size; it is fully convolutional, with inputs defined like:

         self.image_op = tf.placeholder(tf.float32, name='input_image')
         self.width_op = tf.placeholder(tf.int32, name='image_width')
         self.height_op = tf.placeholder(tf.int32, name='image_height')
         image_reshape = tf.reshape(self.image_op, [1, self.height_op, self.width_op, 3])
    

and a reshape to [1, height, width, 3].

So, how should I write this as the input shape?

asked Apr 03 '18 by flankechen


2 Answers

Converting a frozen model to tf_lite has never been an easy job, thanks to TensorFlow. Hopefully this command helps you summarize the graph and find the input and output arrays:

bazel-bin/tensorflow/tools/graph_transforms/summarize_graph --in_graph={PATH_TO_FROZEN_GRAPH}/optimized_best.pb
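If the summarize_graph binary has not been built yet, it can be built first from a TensorFlow source checkout (the same tree used for the toco run above):

bazel build tensorflow/tools/graph_transforms:summarize_graph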
answered Sep 29 '22 by Mahesh


For the input array:

[node.op.name for node in model.inputs]

For the output array:

[node.op.name for node in model.outputs]
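These one-liners assume model is a Keras model object. A minimal sketch, assuming the model is available as an HDF5 file (the path here is hypothetical):

import tensorflow as tf

# hypothetical path to a saved Keras model
model = tf.keras.models.load_model('my_model.h5')

input_names = [node.op.name for node in model.inputs]
output_names = [node.op.name for node in model.outputs]
print(input_names)
print(output_names)

The printed names are what the converter's input and output array flags expect.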
answered Sep 29 '22 by Raza