I am currently implementing an FCN in TensorFlow that supports variable input image sizes.
My images come in many different sizes, but unfortunately I am unable to start training with a batch size other than 1.
I am using the feed dict in the following way:
feed_dict = {fcn.images: image_batch,
             fcn.labels: labels_batch,
             fcn.dropout_keep: dropout}
result = sess.run(list(tf_ops), feed_dict=feed_dict)
I have already tried:
- image_batch and labels_batch as numpy arrays. This does not work, since numpy arrays cannot have a variable size along some dimensions.
- image_batch and labels_batch as lists of numpy arrays. Here TensorFlow seems to call numpy.array(image_batch) internally, which fails for the same reason.
- tf.pack(). Unfortunately, this does not support different image sizes either.
My question is: is there a way to solve this problem?
Thank you in advance for any suggestions and advice.
So we can close this - quoting Olivier Moindrot above:
You have to pad or resize all your images to the same size before batching them.
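To illustrate the padding approach, here is a minimal sketch (not code from the original post; the function name pad_batch and the zero-padding convention are my own assumptions) that pads a list of differently sized images and label maps to the largest height and width in the list, so they can be stacked into a single batch array and fed through feed_dict:

```python
import numpy as np

def pad_batch(images, labels, pad_value=0):
    """Illustrative sketch: zero-pad a list of (H, W, C) images and
    matching (H, W) label maps to the largest H/W in the list, then
    stack them into single batch arrays of shape (N, H_max, W_max, C)
    and (N, H_max, W_max)."""
    max_h = max(img.shape[0] for img in images)
    max_w = max(img.shape[1] for img in images)

    padded_images = np.stack([
        # pad on the bottom/right so all images share one shape
        np.pad(img,
               ((0, max_h - img.shape[0]),
                (0, max_w - img.shape[1]),
                (0, 0)),
               mode="constant", constant_values=pad_value)
        for img in images
    ])
    padded_labels = np.stack([
        np.pad(lbl,
               ((0, max_h - lbl.shape[0]),
                (0, max_w - lbl.shape[1])),
               mode="constant", constant_values=pad_value)
        for lbl in labels
    ])
    return padded_images, padded_labels
```

Note that if you pad the labels with a real class index, the loss will be computed over the padding pixels too; a common workaround is to pad labels with a reserved "ignore" value and mask it out in the loss.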
Note that after Olivier's answer, there was a new tf.image.decode_and_crop_jpeg
op added that can make it a bit easier to do this.