
Visualize output of each layer in theano Convolutional MLP

I am reading the Convolutional Neural Networks tutorial. I want to visualize the output of each layer after the model is trained. For example, in the function "evaluate_lenet5" I want to pass an instance (which is an image) to the network and see the output of each layer, along with the class the trained network assigns to the input. I thought it might be as easy as taking a dot product of the image and the weight matrix of each layer, but that did not work at all.

I have the layer objects defined as:

# Reshape matrix of rasterized images of shape (batch_size, 28 * 28)
# to a 4D tensor, compatible with our LeNetConvPoolLayer
# (28, 28) is the size of MNIST images.
layer0_input = x.reshape((batch_size, 1, 28, 28))

# Construct the first convolutional pooling layer:
# filtering reduces the image size to (28-5+1 , 28-5+1) = (24, 24)
# maxpooling reduces this further to (24/2, 24/2) = (12, 12)
# 4D output tensor is thus of shape (batch_size, nkerns[0], 12, 12)
layer0 = LeNetConvPoolLayer(
    rng,
    input=layer0_input,
    image_shape=(batch_size, 1, 28, 28),
    filter_shape=(nkerns[0], 1, 5, 5),
    poolsize=(2, 2)
)

# Construct the second convolutional pooling layer
# filtering reduces the image size to (12-5+1, 12-5+1) = (8, 8)
# maxpooling reduces this further to (8/2, 8/2) = (4, 4)
# 4D output tensor is thus of shape (batch_size, nkerns[1], 4, 4)
layer1 = LeNetConvPoolLayer(
    rng,
    input=layer0.output,
    image_shape=(batch_size, nkerns[0], 12, 12),
    filter_shape=(nkerns[1], nkerns[0], 5, 5),
    poolsize=(2, 2)
)

# the HiddenLayer being fully-connected, it operates on 2D matrices of
# shape (batch_size, num_pixels) (i.e matrix of rasterized images).
# This will generate a matrix of shape (batch_size, nkerns[1] * 4 * 4),
# or (500, 50 * 4 * 4) = (500, 800) with the default values.
layer2_input = layer1.output.flatten(2)

# construct a fully-connected sigmoidal layer
layer2 = HiddenLayer(
    rng,
    input=layer2_input,
    n_in=nkerns[1] * 4 * 4,
    n_out=500,
    activation=T.tanh
)

# classify the values of the fully-connected sigmoidal layer
layer3 = LogisticRegression(input=layer2.output, n_in=500, n_out=10)

So can you suggest a way to visualize a sample of processing an image step by step after the neural network is trained?

RezKesh asked Jan 29 '16


1 Answer

This isn't so hard. If you are using the same class definition of LeNetConvPoolLayer from the Theano deep-learning tutorial, then you just need to compile a function with x as input and [LayerObject].output as output (where LayerObject can be any layer object, like layer0, layer1, etc., whichever layer you want to visualize).

from theano import function

vis_layer1 = function([x], [layer1.output])

Pass one (or many) samples, shaped exactly as the input tensor was fed during training, and you'll get the output of the particular layer for which your function was compiled.

Note: This way you get the outputs in exactly the same shape the model used during computation. However, you can reshape them as you want by reshaping the output variable, e.g. compiling with layer1.output.flatten(n) instead.
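To see why the plain dot-product attempt from the question fails, note that each conv-pool stage is a sliding-window convolution followed by max pooling and a nonlinearity, not a single matrix product. Below is a toy NumPy analogue (not the tutorial's actual code; shapes, filter counts, and the single-channel simplification at the second stage are all assumptions for illustration) that runs the forward pass stage by stage and captures each intermediate output in a dict, which is conceptually what the compiled Theano functions give you:

```python
import numpy as np

def conv_valid(img, kernels):
    """Valid-mode 2D correlation: (H, W) image x (K, kh, kw) kernels
    -> (K, H-kh+1, W-kw+1) feature maps."""
    K, kh, kw = kernels.shape
    H, W = img.shape
    out = np.empty((K, H - kh + 1, W - kw + 1))
    for k in range(K):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[k, i, j] = np.sum(img[i:i + kh, j:j + kw] * kernels[k])
    return out

def maxpool2x2(fmaps):
    """(K, H, W) -> (K, H//2, W//2) by non-overlapping 2x2 max pooling."""
    K, H, W = fmaps.shape
    return fmaps.reshape(K, H // 2, 2, W // 2, 2).max(axis=(2, 4))

rng = np.random.RandomState(0)
img = rng.rand(28, 28)        # one MNIST-sized input (toy random data)
w0 = rng.randn(4, 5, 5)       # hypothetical stand-in for nkerns[0] filters
w1 = rng.randn(6, 5, 5)       # hypothetical second-stage filters

activations = {}
activations["conv0"] = np.tanh(conv_valid(img, w0))                  # (4, 24, 24)
activations["pool0"] = maxpool2x2(activations["conv0"])              # (4, 12, 12)
# Simplification: the real layer1 convolves over all nkerns[0] input
# channels; here we convolve only the first pooled map to keep it short.
activations["conv1"] = np.tanh(conv_valid(activations["pool0"][0], w1))  # (6, 8, 8)
activations["pool1"] = maxpool2x2(activations["conv1"])              # (6, 4, 4)
```

Each entry of `activations` is what you would plot (e.g. one `imshow` per feature map); the shape progression (28→24→12→8→4) matches the comments in the tutorial code above.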

ayandas answered Oct 13 '22