
How to generate CNN heatmaps using built-in Keras in TF2.0 (tf.keras)

I used to generate heatmaps for my Convolutional Neural Networks using the stand-alone Keras library on top of TensorFlow 1. That worked fine; however, after switching to TF 2.0 and the built-in tf.keras implementation (with eager execution), I can no longer use my old heatmap generation code.

So I rewrote parts of my code for TF 2.0 and ended up with the following:

from tensorflow.keras.applications.vgg16 import preprocess_input
from tensorflow.keras.preprocessing.image import load_img
from tensorflow.keras.models import load_model

from tensorflow.keras import preprocessing
from tensorflow.keras import backend as K
from tensorflow.keras import models

import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np

image_size = 150
image_path = "/tmp/images/test-image.jpg"
model_path = "/tmp/models/prototype/basic_vgg16.h5"

# Load pre-trained Keras model and the image to classify
model = load_model(model_path)  # VGG16 CNN with custom classifier head
image = load_img(image_path, target_size=(image_size, image_size))
img_tensor = preprocessing.image.img_to_array(image)
img_tensor = np.expand_dims(img_tensor, axis=0)
img_tensor = preprocess_input(img_tensor)

input_layer = model.get_layer("model_input")
conv_layer = model.get_layer("block5_conv3")
heatmap_model = models.Model([model.inputs], [conv_layer.output, model.output])

# Get gradient of the winner class w.r.t. the output of the (last) conv. layer
with tf.GradientTape() as gtape:
    conv_output, predictions = heatmap_model(img_tensor)
    loss = predictions[:, np.argmax(predictions[0])]
    grads = gtape.gradient(loss, conv_output)
    pooled_grads = K.mean(grads, axis=(0, 1, 2))

# Get values of pooled grads and model conv. layer output as Numpy arrays
iterate = K.function([model.inputs], [pooled_grads, conv_layer.output[0]])
pooled_grads_value, conv_layer_output_value = iterate([img_tensor])

# Multiply each channel in the feature-map array by "how important it is"
for i in range(pooled_grads_value.shape[0]):
    conv_layer_output_value[:, :, i] *= pooled_grads_value[i]

# Channel-wise mean of resulting feature-map is the heatmap of class activation
heatmap = np.mean(conv_layer_output_value, axis=-1)
heatmap = np.maximum(heatmap, 0)
max_heat = np.max(heatmap)
if max_heat == 0:
    max_heat = 1e-10
heatmap /= max_heat

# Render heatmap via pyplot
plt.matshow(heatmap)
plt.show()

But now the following line:

iterate = K.function([model.inputs], [pooled_grads, conv_layer.output[0]])

leads to this error message:

AttributeError: Tensor.op is meaningless when eager execution is enabled.

I have always used Keras and never worked with TF directly, so I am a bit lost here.
Any ideas what the problem could be?


PS: If you want to copy and paste this code, you can create the VGG16-based model like so:

# Create Keras model from pre-trained VGG16 and custom classifier
from tensorflow.keras.applications.vgg16 import VGG16
from tensorflow.keras import layers, models, optimizers

input_layer = layers.Input(shape=(image_size, image_size, 3), name="model_input")
vgg16_model = VGG16(weights="imagenet", include_top=False, input_tensor=input_layer)
model_head = vgg16_model.output
model_head = layers.Flatten(name="model_head_flatten")(model_head)
model_head = layers.Dense(256, activation="relu")(model_head)
model_head = layers.Dense(3, activation="softmax")(model_head)
model = models.Model(inputs=input_layer, outputs=model_head)
model.compile(loss="categorical_crossentropy", optimizer=optimizers.Adam(), metrics=["accuracy"])
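
If you then save this model, the snippet at the top of the question can load it from the same path (a minimal sketch; the path is just the placeholder used above):

# Persist the model so load_model(model_path) above finds it
model.save("/tmp/models/prototype/basic_vgg16.h5")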

asked Oct 10 '19 by Matthias




1 Answer

At the end of the GradientTape block, conv_output and grads already hold the values you need, so the iterate function is no longer required to compute them.

Working example below:

from tensorflow.keras.applications.vgg16 import preprocess_input
from tensorflow.keras.preprocessing.image import load_img
from tensorflow.keras.models import load_model

from tensorflow.keras import preprocessing
from tensorflow.keras import backend as K
from tensorflow.keras import models

import tensorflow as tf
import numpy as np

image_size = 224

# Load pre-trained Keras model and the image to classify
model = tf.keras.applications.vgg16.VGG16()
image = np.random.random((image_size, image_size, 3))
img_tensor = preprocessing.image.img_to_array(image)
img_tensor = np.expand_dims(img_tensor, axis=0)
img_tensor = preprocess_input(img_tensor)

conv_layer = model.get_layer("block5_conv3")
heatmap_model = models.Model([model.inputs], [conv_layer.output, model.output])

# Get gradient of the winner class w.r.t. the output of the (last) conv. layer
with tf.GradientTape() as gtape:
    conv_output, predictions = heatmap_model(img_tensor)
    loss = predictions[:, np.argmax(predictions[0])]
    grads = gtape.gradient(loss, conv_output)
    pooled_grads = K.mean(grads, axis=(0, 1, 2))

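# Weight each channel of the conv. output by its pooled gradient, then average over the channel axis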
heatmap = tf.reduce_mean(tf.multiply(pooled_grads, conv_output), axis=-1)
heatmap = np.maximum(heatmap, 0)
max_heat = np.max(heatmap)
if max_heat == 0:
    max_heat = 1e-10
heatmap /= max_heat

print(heatmap.shape)
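
If you also want to render the result as in the question, here is a minimal pyplot sketch (assuming the code above has run; heatmap still carries a leading batch dimension, so its shape is (1, 14, 14) here):

import matplotlib.pyplot as plt

# Visualize the normalized class-activation map; drop the batch dimension first
plt.matshow(heatmap[0])
plt.show()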

answered Nov 01 '22 by Raphael Meudec