I am designing a deep CNN classifier for urban feature detection. Most of the time my network classifies and segments buildings properly, but it often gets confused by illumination changes or by other objects with a similar appearance.
Alongside the segmented image, I want to create a color map that represents how certain the classifier is. I trained the network with SoftmaxWithLoss.
layer {
  name: "score"
  type: "Deconvolution"
  bottom: "pool_3"
  top: "score"
  convolution_param {
    num_output: 2
    bias_term: false
    pad: 2
    kernel_size: 8
    stride: 4
  }
}
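Since the network is trained with SoftmaxWithLoss, one way to get a per-pixel certainty value at test time is to apply a softmax to the raw scores of the layer above and take the winning class probability at each pixel. A minimal NumPy sketch (the `(C, H, W)` layout and the name `confidence_map` are assumptions, not part of the original post):

```python
import numpy as np

def confidence_map(scores):
    """Per-pixel labels and confidence from raw class scores.

    scores: array of shape (C, H, W) -- e.g. the output blob of the
    "score" Deconvolution layer above, with C classes (here C = 2).
    Returns (labels, confidence), where confidence is the maximum
    softmax probability at each pixel, in (1/C, 1].
    """
    # Numerically stable softmax over the class axis
    e = np.exp(scores - scores.max(axis=0, keepdims=True))
    probs = e / e.sum(axis=0, keepdims=True)
    labels = probs.argmax(axis=0)      # segmentation map, shape (H, W)
    confidence = probs.max(axis=0)     # certainty map, shape (H, W)
    return labels, confidence
```

The confidence array can then be rendered directly as the color map next to the segmentation result.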
I am expecting output similar to this color map image.
My issues are:
Note: currently I am able to get a color map using entropy.
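For reference, an entropy-based certainty map like the one mentioned above can be computed from the softmax probabilities in a few lines. This is only a sketch of the general technique, not the poster's exact code; the `(C, H, W)` probability layout is an assumption:

```python
import numpy as np

def entropy_map(probs, eps=1e-12):
    """Per-pixel normalized entropy of softmax probabilities.

    probs: array of shape (C, H, W) holding per-class probabilities.
    Returns values in [0, 1]: 0 = fully certain (one-hot),
    1 = maximally uncertain (uniform over the C classes).
    """
    n_classes = probs.shape[0]
    # eps guards against log(0) at fully confident pixels
    h = -(probs * np.log(probs + eps)).sum(axis=0)
    return h / np.log(n_classes)
```

The result can be colored with any heat-map palette, e.g. `plt.imshow(entropy_map(probs), cmap='jet')` in matplotlib.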
You might want to perform an occlusion sensitivity experiment, which lets you build a heat map of the most important image areas.
From this answer on AI StackExchange:
Here's the idea. Suppose that a ConvNet classifies an image as a dog. How can we be certain that it’s actually picking up on the dog in the image as opposed to some contextual cues from the background or some other miscellaneous object?
One way of investigating which part of the image a classification prediction is coming from is to plot the probability of the class of interest (e.g. the dog class) as a function of the position of an occluder object. If we iterate over regions of the image, replace each region with zeros, and check the classification result, we can build a two-dimensional heat map of what is most important to the network for a particular image.
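The procedure described above can be sketched as follows. `predict_fn` is a placeholder for your own forward pass (e.g. a pycaffe `net.forward` wrapper) that returns class probabilities; the patch size and stride are arbitrary choices for illustration:

```python
import numpy as np

def occlusion_heatmap(predict_fn, image, target_class, patch=16, stride=8):
    """Slide a zero-filled patch over the image and record the drop
    in the target class probability at each position.

    predict_fn: callable, image -> 1-D array of class probabilities
                (a stand-in for the network's forward pass).
    Returns a 2-D heat map; large values mark regions the
    prediction depends on most.
    """
    h, w = image.shape[:2]
    base = predict_fn(image)[target_class]   # unoccluded reference score
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    heat = np.zeros((rows, cols))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = 0   # zero out one region
            heat[i, j] = base - predict_fn(occluded)[target_class]
    return heat
```

Upsampling `heat` back to the input resolution and overlaying it on the image gives the kind of importance map the quoted answer describes.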