I have created a stacked Keras encoder-decoder model using the following code:
# Create the encoder
# Define an input sequence.
encoder_inputs = keras.layers.Input(shape=(None, num_input_features))
# Create a list of RNN Cells, these are then concatenated into a single layer with the RNN layer.
encoder_cells = []
for hidden_neurons in hparams['encoder_hidden_layers']:
    encoder_cells.append(keras.layers.GRUCell(hidden_neurons,
                                              kernel_regularizer=regulariser,
                                              recurrent_regularizer=regulariser,
                                              bias_regularizer=regulariser))
encoder = keras.layers.RNN(encoder_cells, return_state=True)
encoder_outputs_and_states = encoder(encoder_inputs)
# Discard the encoder outputs and only keep the states. The outputs are of no interest to us; the encoder's job is
# to create a state describing the input sequence.
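# With a list of cells, return_state=True returns [outputs, state_1, state_2, ...]: one state tensor per GRUCell in the stack.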
encoder_states = encoder_outputs_and_states[1:]
print(encoder_states)
if hparams['encoder_hidden_layers'][-1] != hparams['decoder_hidden_layers'][0]:
    encoder_states = keras.layers.Dense(hparams['decoder_hidden_layers'][0])(encoder_states[-1])
# Create the decoder, the decoder input will be set to zero
decoder_inputs = keras.layers.Input(shape=(None, 1))
decoder_cells = []
for hidden_neurons in hparams['decoder_hidden_layers']:
    decoder_cells.append(keras.layers.GRUCell(hidden_neurons,
                                              kernel_regularizer=regulariser,
                                              recurrent_regularizer=regulariser,
                                              bias_regularizer=regulariser))
decoder = keras.layers.RNN(decoder_cells, return_sequences=True, return_state=True)
# Set the initial state of the decoder to be the output state of the encoder. This is the fundamental part of the
# encoder-decoder architecture.
decoder_outputs_and_states = decoder(decoder_inputs, initial_state=encoder_states)
# Only select the output of the decoder (not the states)
decoder_outputs = decoder_outputs_and_states[0]
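# decoder_outputs has shape (batch_size, timesteps, units_of_last_cell); the remaining list entries are the
# final states, one per cell.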
# Apply a dense layer with linear activation to map the output to the correct dimension and scale (tanh is the
# default activation for GRU in Keras).
decoder_dense = keras.layers.Dense(num_output_features,
                                   activation='linear',
                                   kernel_regularizer=regulariser,
                                   bias_regularizer=regulariser)
decoder_outputs = decoder_dense(decoder_outputs)
model = keras.models.Model(inputs=[encoder_inputs, decoder_inputs], outputs=decoder_outputs)
model.compile(optimizer=optimiser, loss=loss)
model.summary()
This setup works when I have a single-layer encoder and a single-layer decoder with the same number of neurons. However, it does not work when the decoder has more than one layer.
I get the following error message:
ValueError: An `initial_state` was passed that is not compatible with `cell.state_size`. Received `state_spec`=[InputSpec(shape=(None, 48), ndim=2)]; however `cell.state_size` is (48, 58)
My decoder_hidden_layers list contains the entries [48, 58]. The RNN layer that my decoder is built from is therefore a stacked GRU in which the first cell contains 48 neurons and the second contains 58. I would like to set the initial state of the first GRU only. I run the encoder states through a Dense layer so that the shape is compatible with the first layer of the decoder. The error message indicates that I am instead trying to set the initial state of both the first layer and the second layer when I pass the initial_state keyword to the decoder RNN layer. Is this correct behaviour? Normally I would set the initial state of the first decoder layer (not built using a cell structure like this), which would then simply feed its outputs into the subsequent layers. Is there a way to achieve such behaviour in Keras by default when creating a keras.layers.RNN from a list of GRUCell or LSTMCell objects?
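For reference, here is a minimal standalone sketch (assuming tf.keras; the cell sizes 48 and 58 are taken from the error message) of what the stacked RNN appears to expect: an initial_state list with one tensor per cell, each matching that cell's state size.
from tensorflow import keras
# Build a stacked RNN from two GRU cells, as in the decoder above.
cells = [keras.layers.GRUCell(48), keras.layers.GRUCell(58)]
stacked = keras.layers.RNN(cells, return_sequences=True, return_state=True)
inputs = keras.layers.Input(shape=(None, 1))
# One initial-state tensor per cell; the batch dimension is implicit in Input.
state_48 = keras.layers.Input(shape=(48,))
state_58 = keras.layers.Input(shape=(58,))
# Passing one state per cell satisfies cell.state_size == (48, 58).
outputs_and_states = stacked(inputs, initial_state=[state_48, state_58])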
In my own experiments, your initial_states should have batch_size as their first dimension. In other words, each element in one batch may have a different initial state. From your code, I think you missed this dimension.
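Building on that, here is a hedged sketch of one possible adaptation of the code above (variable names follow the question; this is one workaround, not necessarily the only one): project the encoder's final state once per decoder cell, so that initial_state is a list with one batch-first tensor per cell.
# Sketch using the question's variables. Each Dense output keeps batch_size as
# its first dimension, and there is one projected state per decoder GRUCell.
final_encoder_state = encoder_outputs_and_states[-1]
decoder_initial_states = [keras.layers.Dense(units)(final_encoder_state)
                          for units in hparams['decoder_hidden_layers']]  # e.g. [48, 58]
decoder_outputs_and_states = decoder(decoder_inputs,
                                     initial_state=decoder_initial_states)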