After running my model for one epoch, it crashed with the following error message:

```
InvalidArgumentError: Specified a list with shape [60,9] from a tensor with shape [56,9]
	 [[{{node TensorArrayUnstack/TensorListFromTensor}}]]
	 [[sequential_7/lstm_17/PartitionedCall]] [Op:__inference_train_function_29986]
```
This happened after I changed the LSTM layer to stateful=True, which meant I had to pass the batch_input_shape argument instead of input_shape.
Below is my code; I'm sure the problem has something to do with the shape of my data:
```
test_split = 0.2
history_points = 60

n = int(histories.shape[0] * test_split)
histories_train = histories[:n]
y_train = next_values_normalized[:n]
histories_test = histories[n:]
y_test = next_values_normalized[n:]
next_values_test = next_values[n:]

print(histories_train.shape)  # --> (1421, 60, 9)
print(y_train.shape)          # --> (1421, 1)
```
# model architecture
```
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense

model = Sequential()
model.add(LSTM(units=128, stateful=True, return_sequences=True,
               batch_input_shape=(60, history_points, 9)))
model.add(LSTM(units=64, stateful=True, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(units=32))
model.add(Dropout(0.2))
model.add(Dense(20))

ADAM = keras.optimizers.Adam(0.0005, beta_1=0.9, beta_2=0.999, amsgrad=False)
model.compile(loss='mean_squared_error', optimizer=ADAM)

# batchsize is defined elsewhere in the notebook
model.fit(x=histories_train, y=y_train, batch_size=batchsize, epochs=50,
          shuffle=False, validation_split=0.2, verbose=1)
```
For a stateful LSTM, the batch size must be chosen so that the number of samples is divisible by it. See also here:
Keras: What if the size of data is not divisible by batch_size?
In your case, since validation_split=0.2 reserves 20% of your training data for validation, int(1421 * 0.8) = 1136 samples remain for training. 1136 is not divisible by your batch size of 60: the last, incomplete batch contains 1136 mod 60 = 56 samples, which is exactly the [56,9] tensor in the error message. So choose a batch size by which 1136 is divisible, and make sure the first dimension of batch_input_shape matches it.
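To make the numbers concrete, a quick check (everything here follows from the shapes printed above):

```
n_train = int(1421 * (1 - 0.2))  # Keras trains on the first 80% -> 1136 samples
print(n_train % 60)              # 56 -- the incomplete last batch from the error
print([b for b in range(1, n_train + 1) if n_train % b == 0])
# [1, 2, 4, 8, 16, 71, 142, 284, 568, 1136] -- the valid batch sizes
```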
Alternatively, you can remove a few samples (or duplicate some) so that other batch sizes become possible; see the sketch below.
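A minimal sketch of that option, reusing the variable names from your question; batch_size = 16 is just one of the valid divisors, and the model would then need batch_input_shape=(batch_size, history_points, 9) to match:

```
batch_size = 16  # must divide both the training and the validation sample count

# split off the validation set manually (the same 80/20 split Keras would make)
split_at = int(len(histories_train) * 0.8)
x_tr, x_val = histories_train[:split_at], histories_train[split_at:]
y_tr, y_val = y_train[:split_at], y_train[split_at:]

# drop the trailing samples that do not fill a complete batch
n_tr = len(x_tr) // batch_size * batch_size    # 1136 -> 1136 for batch_size=16
n_val = len(x_val) // batch_size * batch_size  # 285  -> 272
x_tr, y_tr = x_tr[:n_tr], y_tr[:n_tr]
x_val, y_val = x_val[:n_val], y_val[:n_val]

model.fit(x_tr, y_tr, batch_size=batch_size, epochs=50,
          shuffle=False, validation_data=(x_val, y_val), verbose=1)
```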