This question is somewhat of an extension of "How can I use values read from TFRecords as arguments to tf.reshape?"
I cast my images into a certain shape with the following code:
height = tf.cast(features['height'],tf.int32)
width = tf.cast(features['width'],tf.int32)
image = tf.reshape(image,tf.pack([height, width, 3]))
In the cifar10_input code from the TensorFlow CIFAR-10 tutorial, the image is then distorted as follows, where IMAGE_SIZE = 32:
height = IMAGE_SIZE
width = IMAGE_SIZE
distorted_image = tf.random_crop(image, [height, width, 3])
However, for my purposes, I don't need to do a random crop now. As such, I replaced that line with:
distorted_image = image
When I do this, it throws the following error:
Traceback (most recent call last):
File "cnn_train.py", line 128, in <module>
tf.app.run()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/default/_app.py", line 30, in run
sys.exit(main(sys.argv))
File "cnn_train.py", line 124, in main
train()
File "cnn_train.py", line 56, in train
images, labels = cnn.distorted_inputs()
File "/home/samuelchin/tensorflow/my_code/CNN/cnn.py", line 123, in distorted_inputs
batch_size=BATCH_SIZE)
File "/home/samuelchin/tensorflow/my_code/CNN/cnn_input.py", line 128, in distorted_inputs
min_queue_examples, batch_size)
File "/home/samuelchin/tensorflow/my_code/CNN/cnn_input.py", line 70, in _generate_image_and_label_batch
min_after_dequeue=min_queue_examples)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/input.py", line 494, in shuffle_batch
dtypes=types, shapes=shapes)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/data_flow_ops.py", line 404, in __init__
shapes = _as_shape_list(shapes, dtypes)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/data_flow_ops.py", line 70, in _as_shape_list
raise ValueError("All shapes must be fully defined: %s" % shapes)
ValueError: All shapes must be fully defined: [TensorShape([Dimension(None), Dimension(None), Dimension(None)]), TensorShape([])]
I have two questions: what does this error mean, and how do I fix it?
Because you're generating your image dynamically (including reading its height and width from the TFRecord file at run time), TensorFlow cannot determine the static shape of the resulting image tensor. Many of the later ops in the pipeline, such as the tf.train.shuffle_batch call in your traceback, need that shape to be fully defined when the Python code builds the graph.
tf.random_crop has the incidental effect of setting the image to a known, fixed size, which leaves a fully defined shape for the ops that follow.
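To make that concrete, here is a minimal sketch (the placeholders stand in for the height, width, and flattened image read from the TFRecord; the names are illustrative, not your actual pipeline):

import tensorflow as tf

# Stand-ins for the values parsed out of the TFRecord; their actual values
# are only known when the graph runs.
height = tf.placeholder(tf.int32, shape=[])
width = tf.placeholder(tf.int32, shape=[])
flat_image = tf.placeholder(tf.uint8, shape=[None])

# The target shape is itself a tensor, so TensorFlow cannot infer the
# static shape of the result.
image = tf.reshape(flat_image, tf.pack([height, width, 3]))
print(image.get_shape())    # not fully defined

# random_crop's size argument is a Python constant, so the output's static
# shape is fully defined again.
cropped = tf.random_crop(image, [32, 32, 3])
print(cropped.get_shape())  # (32, 32, 3)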
You can just slice the image to the size you want instead of doing a random_crop, but you need some operation that turns the image into a fixed-size tensor. If you want it to be 32x32 and you know your inputs are already 32x32, you can simply call set_shape on the tensor (but you had better be right). Otherwise, crop and/or resize to the size you want, as in the sketch below.
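A minimal sketch of both options (the 32x32x3 target and the placeholder input are assumptions for illustration):

import tensorflow as tf

# Stand-in for the dynamically reshaped image: its static shape is unknown.
image = tf.placeholder(tf.uint8, shape=[None, None, 3])

# Option 1: you know every input really is 32x32x3, so just assert it.
# set_shape only records static shape information; nothing is checked or
# resized at run time, so this must actually hold for your data.
image.set_shape([32, 32, 3])
print(image.get_shape())    # (32, 32, 3)

# Option 2: take a fixed-size slice (here, from the top-left corner)
# instead of a random crop; the constant size argument gives the output a
# fully defined static shape.
other = tf.placeholder(tf.uint8, shape=[None, None, 3])
cropped = tf.slice(other, [0, 0, 0], [32, 32, 3])
print(cropped.get_shape())  # (32, 32, 3)

Either way, the tensor that reaches tf.train.shuffle_batch has a fully defined shape, which is what the error is asking for.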