I am starting with TensorFlow 2.0 and trying to implement Guided BackProp to display a saliency map. I started by computing the loss between y_pred and y_true of an image, then computed the gradients of all layers with respect to this loss.
with tf.GradientTape() as tape:
    logits = model(tf.cast(image_batch_val, dtype=tf.float32))
    print('`logits` has type {0}'.format(type(logits)))
    # labels must be float probabilities with the same dtype as logits
    labels = tf.cast(tf.one_hot(1 - label_batch_val, depth=2), dtype=tf.float32)
    xentropy = tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits)
    reduced = tf.reduce_mean(xentropy)
grads = tape.gradient(reduced, model.trainable_variables)
However, I don't know what to do with these gradients in order to obtain the Guided BackProp visualization.
This is my model. I created it using Keras layers:
from tensorflow.keras.layers import (Input, Conv2D, BatchNormalization, Activation,
                                     MaxPool2D, Flatten, Dense, Dropout)
from tensorflow.keras import models

image_input = Input((input_size, input_size, 3))
conv_0 = Conv2D(32, (3, 3), padding='SAME')(image_input)
conv_0_bn = BatchNormalization()(conv_0)
conv_0_act = Activation('relu')(conv_0_bn)
conv_0_pool = MaxPool2D((2, 2))(conv_0_act)
conv_1 = Conv2D(64, (3, 3), padding='SAME')(conv_0_pool)
conv_1_bn = BatchNormalization()(conv_1)
conv_1_act = Activation('relu')(conv_1_bn)
conv_1_pool = MaxPool2D((2, 2))(conv_1_act)
conv_2 = Conv2D(64, (3, 3), padding='SAME')(conv_1_pool)
conv_2_bn = BatchNormalization()(conv_2)
conv_2_act = Activation('relu')(conv_2_bn)
conv_2_pool = MaxPool2D((2, 2))(conv_2_act)
conv_3 = Conv2D(128, (3, 3), padding='SAME')(conv_2_pool)
conv_3_bn = BatchNormalization()(conv_3)
conv_3_act = Activation('relu')(conv_3_bn)
conv_4 = Conv2D(128, (3, 3), padding='SAME')(conv_3_act)
conv_4_bn = BatchNormalization()(conv_4)
conv_4_act = Activation('relu')(conv_4_bn)
conv_4_pool = MaxPool2D((2, 2))(conv_4_act)
conv_5 = Conv2D(128, (3, 3), padding='SAME')(conv_4_pool)
conv_5_bn = BatchNormalization()(conv_5)
conv_5_act = Activation('relu')(conv_5_bn)
conv_6 = Conv2D(128, (3, 3), padding='SAME')(conv_5_act)
conv_6_bn = BatchNormalization()(conv_6)
conv_6_act = Activation('relu')(conv_6_bn)
flat = Flatten()(conv_6_act)
fc_0 = Dense(64, activation='relu')(flat)
fc_0_bn = BatchNormalization()(fc_0)
fc_1 = Dense(32, activation='relu')(fc_0_bn)
fc_1_drop = Dropout(0.5)(fc_1)
output = Dense(2, activation='softmax')(fc_1_drop)
model = models.Model(inputs=image_input, outputs=output)
I am glad to provide more code if needed.
First of all, you have to change the computation of the gradient through a ReLU, i.e. apply the guided backpropagation rule R^l = (f^l > 0) * (R^(l+1) > 0) * R^(l+1), where f^l is the forward activation of the ReLU and R^(l+1) is the gradient coming from the layer above. Here is a graphic example from the paper.
This formula can be implemented with the following code:
@tf.RegisterGradient("GuidedRelu")
def _GuidedReluGrad(op, grad):
    gate_f = tf.cast(op.outputs[0] > 0, "float32")  # for f^l > 0
    gate_R = tf.cast(grad > 0, "float32")           # for R^(l+1) > 0
    return gate_f * gate_R * grad
Now you have to override the original TF implementation of ReLU with:
with tf.compat.v1.get_default_graph().gradient_override_map({'Relu': 'GuidedRelu'}):
    # put here the code for computing the gradient
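Note that gradient_override_map only takes effect when the gradient is computed inside a tf.compat.v1 graph/session. If you run TensorFlow 2.x eagerly instead, a similar effect can be obtained with tf.custom_gradient. This is only a minimal sketch of that alternative (the name guided_relu is mine, not from the paper); you would then use it as the activation of your Conv2D blocks, e.g. Activation(guided_relu) instead of Activation('relu'):

import tensorflow as tf

@tf.custom_gradient
def guided_relu(x):
    # forward pass: ordinary ReLU
    y = tf.nn.relu(x)

    def grad(dy):
        # backward pass: let the gradient through only where the forward
        # activation is positive (f^l > 0) AND the incoming gradient is
        # positive (R^(l+1) > 0)
        gate_f = tf.cast(x > 0, dy.dtype)
        gate_R = tf.cast(dy > 0, dy.dtype)
        return gate_f * gate_R * dy

    return y, grad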
After computing the gradient, you can visualize the result. However, one last remark: you compute a visualization for a single class. This means you take the activation of a chosen neuron and set the activations of all the other neurons to zero as the input of Guided BackProp.
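For example (a sketch, assuming the model's ReLUs already use the guided gradient from above and that class_idx is the hypothetical index of the class you want to visualize), you can watch the input image with the tape and differentiate only the chosen output:

class_idx = 1  # hypothetical: index of the class you want to visualize

image = tf.cast(image_batch_val, tf.float32)
with tf.GradientTape() as tape:
    tape.watch(image)            # we need d(score)/d(input image)
    preds = model(image)
    score = preds[:, class_idx]  # keep the chosen neuron, ignore the others

# the saliency map is the gradient of that score w.r.t. the input pixels
saliency = tape.gradient(score, image)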