I'm trying to obtain the output of an intermediate layer in Keras. Following is my code:
from keras import backend as K

XX = model.input  # model is the Keras Sequential() model object
YY = model.layers[0].output
F = K.function([XX], [YY])
Xaug = X_train[:9]
Xresult = F([Xaug.astype('float32')])
Running this, I got an error:
InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'dropout_1/keras_learning_phase' with dtype bool
I came to know that because I'm using a dropout layer in my model, I have to pass a learning_phase() flag to my function, as per the Keras documentation.
I changed my code to the following:
XX = model.input
YY = model.layers[0].output
F = K.function([XX, K.learning_phase()], [YY])
Xaug = X_train[:9]
Xresult = F([Xaug.astype('float32'), 0])
Now I'm getting a new error that I'm unable to figure out:
TypeError: Cannot interpret feed_dict key as Tensor: Can not convert a int into a Tensor.
Any help would be appreciated.
PS : I'm new to TensorFlow and Keras.
Edit 1: Following is the complete code that I'm using. I'm using the Spatial Transformer Network as discussed in this NIPS paper and its Keras implementation here.
input_shape = X_train.shape[1:]
# initial weights for the final Dense layer of the localization network:
# zero W plus an identity affine transform in b, so the transformer
# starts out as the identity mapping
b = np.zeros((2, 3), dtype='float32')
b[0, 0] = 1
b[1, 1] = 1
W = np.zeros((100, 6), dtype='float32')
weights = [W, b.flatten()]
locnet = Sequential()
locnet.add(Convolution2D(64, (3, 3), input_shape=input_shape, padding='same'))
locnet.add(Activation('relu'))
locnet.add(Convolution2D(64, (3, 3), padding='same'))
locnet.add(Activation('relu'))
locnet.add(MaxPooling2D(pool_size=(2, 2)))
locnet.add(Convolution2D(128, (3, 3), padding='same'))
locnet.add(Activation('relu'))
locnet.add(Convolution2D(128, (3, 3), padding='same'))
locnet.add(Activation('relu'))
locnet.add(MaxPooling2D(pool_size=(2, 2)))
locnet.add(Convolution2D(256, (3, 3), padding='same'))
locnet.add(Activation('relu'))
locnet.add(Convolution2D(256, (3, 3), padding='same'))
locnet.add(Activation('relu'))
locnet.add(MaxPooling2D(pool_size=(2, 2)))
locnet.add(Dropout(0.5))
locnet.add(Flatten())
locnet.add(Dense(100))
locnet.add(Activation('relu'))
locnet.add(Dense(6, weights=weights))
model = Sequential()
model.add(SpatialTransformer(localization_net=locnet,
                             output_size=(128, 128),
                             input_shape=input_shape))
model.add(Convolution2D(64, (3, 3), padding='same'))
model.add(Activation('relu'))
model.add(Convolution2D(64, (3, 3), padding='same'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Convolution2D(128, (3, 3), padding='same'))
model.add(Activation('relu'))
model.add(Convolution2D(128, (3, 3), padding='same'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Convolution2D(256, (3, 3), padding='same'))
model.add(Activation('relu'))
model.add(Convolution2D(256, (3, 3), padding='same'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Convolution2D(256, (3, 3), padding='same'))
model.add(Activation('relu'))
model.add(Convolution2D(256, (3, 3), padding='same'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(256))
model.add(Activation('relu'))
model.add(Dense(num_classes))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
#==============================================================================
# Start Training
#==============================================================================
# define training results logger callback
csv_logger = keras.callbacks.CSVLogger(training_logs_path+'.csv')
model.fit(X_train, y_train,
          batch_size=batch_size,
          epochs=20,
          validation_data=(X_valid, y_valid),
          shuffle=True,
          callbacks=[SaveModelCallback(), csv_logger])
#==============================================================================
# Visualize what Transformer layer has learned
#==============================================================================
XX = model.input
YY = model.layers[0].output
F = K.function([XX, K.learning_phase()], [YY])
Xaug = X_train[:9]
Xresult = F([Xaug.astype('float32'), 0])
# input images
for i in range(9):
    plt.subplot(3, 3, i + 1)
    plt.imshow(np.squeeze(Xaug[i]))
    plt.axis('off')

# output of the SpatialTransformer layer
for i in range(9):
    plt.subplot(3, 3, i + 1)
    plt.imshow(np.squeeze(Xresult[0][i]))
    plt.axis('off')
An intermediate layer is any layer between the input layer and the output layer. It is often called a hidden layer, since it is where the bulk of a network's computation happens; a shallow neural network has just one of them. The number of units and the activation function in an intermediate layer have a major influence on validation accuracy.
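To pick out the intermediate layer you care about, it can help to list the model's layers and their output shapes first (a minimal sketch, assuming model is the Sequential model built above):

for i, layer in enumerate(model.layers):
    print(i, layer.name, layer.output_shape)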
The easiest way is to create a new model in Keras, without calling the backend directly. You'll need the functional Model API for this:
from keras.models import Model

XX = model.input
YY = model.layers[0].output
new_model = Model(XX, YY)  # maps the original input to the first layer's output
Xaug = X_train[:9]
Xresult = new_model.predict(Xaug)
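Note that predict always runs in the inference (test) phase, so the Dropout layers are disabled; this is exactly what passing a learning-phase value of 0 was meant to achieve. Also, predict returns a NumPy array rather than a list, so you index Xresult[i] instead of Xresult[0][i]. A minimal usage sketch for the visualization, assuming X_train and matplotlib are available as in the question:

import numpy as np
import matplotlib.pyplot as plt

Xaug = X_train[:9].astype('float32')
Xresult = new_model.predict(Xaug)  # shape: (9, 128, 128, channels)

# transformed images produced by the SpatialTransformer layer
for i in range(9):
    plt.subplot(3, 3, i + 1)
    plt.imshow(np.squeeze(Xresult[i]))
    plt.axis('off')
plt.show()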