I was trying to train an LSTM model using Keras, but I think I got something wrong here. I got the following error:
ValueError: Error when checking input: expected lstm_17_input to have 3 dimensions, but got array with shape (10000, 0, 20)
while my code looks like
model = Sequential()
model.add(LSTM(256, activation="relu", dropout=0.25, recurrent_dropout=0.25, input_shape=(None, 20, 64)))
model.add(Dense(1, activation="sigmoid"))
model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
model.fit(X_train, y_train,
          batch_size=batch_size,
          epochs=10)
where X_train has a shape of (10000, 20), and the first few data points look like
array([[ 0,  0,  0, ..., 40, 40,  9],
       [ 0,  0,  0, ..., 33, 20, 51],
       [ 0,  0,  0, ..., 54, 54, 50],
       ...
and y_train has a shape of (10000,), which is a binary (0/1) label array. Could someone point out where I went wrong here?
For the sake of completeness, here's what happened.
First up, LSTM, like all layers in Keras, accepts two shape arguments: input_shape and batch_input_shape. The difference is a matter of convention: input_shape does not contain the batch size, while batch_input_shape is the full input shape, including the batch size.
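For instance, here's a minimal sketch of the two equivalent ways to declare the same per-sample shape (the batch size of 32 below is purely an illustrative assumption, not something from your code):

from keras.models import Sequential
from keras.layers import LSTM

# Both layers consume batches of shape (batch_size, 20, 1):
model_a = Sequential()
model_a.add(LSTM(256, input_shape=(20, 1)))            # batch size left unspecified

model_b = Sequential()
model_b.add(LSTM(256, batch_input_shape=(32, 20, 1)))  # batch size fixed to 32 (assumed)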
Hence, the specification input_shape=(None, 20, 64) tells Keras to expect a 4-dimensional input, which is not what you want. The correct value would have been just (20,).
But that's not all. The LSTM layer is a recurrent layer, hence it expects a 3-dimensional input (batch_size, timesteps, input_dim). That's why the correct specification is input_shape=(20, 1) or batch_input_shape=(10000, 20, 1). Plus, your training array should also be reshaped to denote that it has 20 time steps and 1 input feature per step.
Hence, the solution:
import numpy as np

X_train = np.expand_dims(X_train, 2)  # makes it (10000, 20, 1)
...
model = Sequential()
model.add(LSTM(256, activation="relu", dropout=0.25, recurrent_dropout=0.25,
               input_shape=(20, 1)))
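With that change, the compile and fit calls from your question run unchanged. Here's a minimal end-to-end sketch using random stand-in data with the same shapes as yours (the batch size of 32 is an assumed value, since batch_size was never defined in the snippet):

import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

# Random stand-ins with the same shapes as the data in the question
X_train = np.random.randint(0, 64, size=(10000, 20))
y_train = np.random.randint(0, 2, size=(10000,))

X_train = np.expand_dims(X_train, 2)  # (10000, 20) -> (10000, 20, 1)

model = Sequential()
model.add(LSTM(256, activation="relu", dropout=0.25, recurrent_dropout=0.25,
               input_shape=(20, 1)))
model.add(Dense(1, activation="sigmoid"))
model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
model.fit(X_train, y_train,
          batch_size=32,  # assumed; the original snippet never set batch_size
          epochs=10)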