I want to manipulate the activations of the previous layer with a custom Keras layer. The layer below simply multiplies the activations of the previous layer by a number.
class myLayer(Layer):
    def __init__(self, **kwargs):
        super(myLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        self.output_dim = input_shape[0][1]
        super(myLayer, self).build(input_shape)

    def call(self, inputs, **kwargs):
        if not isinstance(inputs, list):
            raise ValueError('This layer should be called on a list of inputs.')
        mainInput = inputs[0]
        nInput = inputs[1]
        changed = tf.multiply(mainInput, nInput)
        forTest = changed
        forTrain = inputs[0]
        return K.in_train_phase(forTrain, forTest)

    def compute_output_shape(self, input_shape):
        print(input_shape)
        return (input_shape[0][0], self.output_dim)
I am creating the model as follows:
inputTensor = Input((5,))
out = Dense(units, input_shape=(5,),activation='relu')(inputTensor)
n = K.placeholder(shape=(1,))
auxInput = Input(tensor=n)
out = myLayer()([out, auxInput])
out = Dense(units, activation='relu')(out)
out = Dense(3, activation='softmax')(out)
model = Model(inputs=[inputTensor, auxInput], outputs=out)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['acc'])
I get this error when I try to call
model.fit(X_train, Y_train, epochs=epochs, verbose=1)
Error
InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder_3' with dtype float and shape [1]
And when I try to feed a value to the placeholder like this:
model.fit([X_train, np.array([3])], Y_train, epochs=epochs, verbose=1)
I get:
ValueError: Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 1 arrays but instead got the following list of 2 arrays:
How should I initialize this placeholder? My goal is to use model.evaluate to test the effect of different values of n on the model during inference. Thanks.
I found a solution that avoids the use of an array for n. Instead of using a placeholder, use a K.variable:
n = K.variable([someInitialValue])
auxInput = Input(tensor=n)
Then you can set the value of n like this at any time, even after compiling the model:
K.set_value(n,[anotherValue])
This allows you to keep training without having to recompile the model, and without passing n to the fit method.
model.fit(X_train,Y_train,....)
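For example, to test the effect of different values of n at inference time (the stated goal), you can change the variable and call model.evaluate without rebuilding anything. A minimal sketch, assuming the model was built with auxInput = Input(tensor=n) as in the question (so only the data input needs to be fed) and using hypothetical X_test and Y_test arrays:

for value in [1.0, 2.0, 5.0]:                      # hypothetical values of n to compare
    K.set_value(n, [value])                        # update the variable in place
    loss, acc = model.evaluate(X_test, Y_test, verbose=0)
    print('n =', value, 'loss =', loss, 'acc =', acc)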
If you are working with several values like that, you can make n a vector:
n = K.variable([val1,val2,val3,val4]) #tensor definition
K.set_value(n,[new1,new2,new3,new4]) #changing values
And inside the layer, the second input (the tensor for n) will have 4 elements:
n1 = inputs[1][0]
n2 = inputs[1][1]
....
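As an illustrative sketch of how call might unpack such a vector (the indices and the scale-and-shift use of n1 and n2 below are just an example, not part of the answer above):

def call(self, inputs, **kwargs):
    mainInput = inputs[0]
    n1 = inputs[1][0]                              # first factor
    n2 = inputs[1][1]                              # second factor
    # use as many factors as you defined in K.variable
    return K.in_train_phase(mainInput, mainInput * n1 + n2)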
You can use Input(shape=(1,)) instead of a placeholder. Also, there is no need to provide input_shape to Dense, since Input(shape=(5,)) already handles it.
inputTensor = Input(shape=(5,))
out = Dense(units, activation='relu')(inputTensor)
auxInput = Input(shape=(1,))
out = myLayer()([out, auxInput])
Repeat the value n when feeding it into the model, for example:
n = 3
n_array = np.array([n] * len(X_train))
model.fit([X_train, n_array], Y_train, epochs=1, verbose=1)
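Because n is now an ordinary model input, you can evaluate with a different value simply by feeding a different repeated array. A minimal sketch with hypothetical X_test and Y_test arrays:

for value in [1, 3, 10]:                           # hypothetical values of n
    n_test = np.array([value] * len(X_test))       # repeat n once per sample
    loss, acc = model.evaluate([X_test, n_test], Y_test, verbose=0)
    print('n =', value, 'loss =', loss, 'acc =', acc)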
What's been described above is just a quick hack. If you want to provide multiple parameters to the layer, you can initialize K.variable in the constructor __init__().
For example,
class myLayer(Layer):
    def __init__(self, default_scale=3.0, default_shift=1.0, **kwargs):
        self.scale = K.variable(default_scale)
        self.shift = K.variable(default_shift)
        super(myLayer, self).__init__(**kwargs)

    def call(self, inputs, **kwargs):
        return K.in_train_phase(inputs, self.scale * inputs + self.shift)
inputTensor = Input(shape=(5,))
out = Dense(units, activation='relu')(inputTensor)
out = myLayer(name='my_layer')(out)
out = Dense(units, activation='relu')(out)
out = Dense(3, activation='softmax')(out)
model = Model(inputs=inputTensor, outputs=out)
By assigning a name to this layer, it's easier to retrieve the variables and modify their values at test time, e.g. K.set_value(model.get_layer('my_layer').scale, 5).
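For instance, to compare a few settings at test time, a sketch with hypothetical X_test and Y_test arrays:

layer = model.get_layer('my_layer')
for scale, shift in [(1.0, 0.0), (3.0, 1.0), (5.0, 2.0)]:  # hypothetical settings
    K.set_value(layer.scale, scale)                # update the layer's variables in place
    K.set_value(layer.shift, shift)
    loss, acc = model.evaluate(X_test, Y_test, verbose=0)
    print('scale =', scale, 'shift =', shift, 'loss =', loss, 'acc =', acc)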