I'm trying to replicate the results of Fully Convolutional Network (FCN) for Semantic Segmentation using TensorFlow.
I'm stuck on feeding training images into the computation graph. The fully convolutional network used VOC PASCAL dataset for training. However, the training images in the dataset are of varied sizes.
I just want to ask if they preprocessed the training images to make them have the same size and how they preprocessed the images. If not, did they just feed batches of images of different sizes into the FCN? Is it possible to feed images of different sizes in one batch into a computation graph in TensorFlow? Is it possible to do that using queue input rather than placeholder?
It's not possible to feed images of different sizes into a single input batch. A batch can contain an undefined number of samples (that's the batch size, denoted None below), but every sample in the batch must have the same dimensions.
When you train a fully convolutional network, you have to train it like a network with fully connected layers at the end. So every image in the input batch must have the same width, height, and depth. Resize them.
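To see why a batch requires uniform sample shapes, here is a minimal sketch using numpy arrays as stand-in images (the sizes and the simple crop are hypothetical; in practice you would resize with something like tf.image.resize):

```python
import numpy as np

# Two toy "images" of different sizes (height, width, channels).
img_a = np.zeros((480, 640, 3))
img_b = np.zeros((375, 500, 3))

try:
    batch = np.stack([img_a, img_b])  # fails: sample shapes differ
except ValueError as err:
    print("cannot batch:", err)

# After bringing both to a common size (here a naive 224x224 crop,
# just for illustration), stacking into one batch works.
common = [img[:224, :224, :] for img in (img_a, img_b)]
batch = np.stack(common)
print(batch.shape)  # (2, 224, 224, 3)
```

The same constraint applies whether the batch comes from a placeholder or from a queue-based input pipeline.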
The only difference is that while fully connected layers output a single vector for every sample in the input batch (shape [None, num_classes]), a fully convolutional network outputs a probability map over the classes.
During training, when the input image dimensions are equal to the network's input dimensions, the output will be a probability map with shape [None, 1, 1, num_classes].
You can remove the size-1 dimensions from the output tensor using tf.squeeze, and then compute the loss and accuracy just as you would for a fully connected network.
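The squeeze step is just a shape change. np.squeeze behaves the same way as tf.squeeze for this case, so the effect can be shown with plain numpy (batch size 8 and 21 classes are illustrative values; 21 is the VOC class count including background):

```python
import numpy as np

# Training-time FCN output: [batch, 1, 1, num_classes].
logits = np.zeros((8, 1, 1, 21))

# Squeeze out the two size-1 spatial dimensions, leaving the
# [batch, num_classes] shape a fully connected classifier would emit.
squeezed = np.squeeze(logits, axis=(1, 2))
print(squeezed.shape)  # (8, 21)
```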
At test time, when you feed the network images with dimensions greater than its training input, the output will be a probability map with shape [None, n, n, num_classes].
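Where n comes from can be estimated with a small helper: for a classifier convolutionalized into an FCN, there is one output location per step the training-sized window can slide across the larger input. The function below is a hedged sketch; the training input size (224) and overall network stride (32) are assumptions, not values from the question:

```python
def output_map_size(input_size, train_input_size=224, total_stride=32):
    """Estimate the spatial size n of the FCN output map.

    One output location per total_stride step the training-sized
    receptive window can slide across the (larger) input.
    """
    return (input_size - train_input_size) // total_stride + 1

print(output_map_size(224))  # 1 -> the [None, 1, 1, num_classes] case
print(output_map_size(384))  # 6 -> a [None, 6, 6, num_classes] map
```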