I'm training an LSTM model using as input a sequence of 50 steps of 3 different features laid out as below:
#x_train
[[[a0,b0,c0], ..., [a49,b49,c49]],
 [[a1,b1,c1], ..., [a50,b50,c50]],
 ...
 [[a49,b49,c49], ..., [a98,b98,c98]]]
Using the following dependent variable
#y_train
[a50, a51, a52, ... a99]
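For context, each input window is 50 consecutive [a, b, c] rows and the target is the next a. A rough sketch of how I build them (make_windows and the raw series array are placeholders, not my exact code):

import numpy as np

# series: raw data of shape (n_rows, 3), columns [a, b, c]
def make_windows(series, window=50):
    x, y = [], []
    for i in range(len(series) - window):
        x.append(series[i:i + window])      # 50 consecutive [a, b, c] rows
        y.append(series[i + window, 0])     # the next value of a only
    return np.array(x), np.array(y)

# x_train.shape == (n_windows, 50, 3), y_train.shape == (n_windows,)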
The code below works to predict just a. How do I get it to predict and return a vector [a, b, c] at a given time step?
from keras.models import Sequential
from keras.layers import LSTM, Dropout, Dense, Activation

def build_model():
    model = Sequential()
    model.add(LSTM(units=50, input_shape=(50, 3), return_sequences=True))
    model.add(Dropout(0.2))
    model.add(LSTM(250, return_sequences=False))
    model.add(Dropout(0.2))
    model.add(Dense(1))              # single unit -> predicts only a
    model.add(Activation("linear"))
    model.compile(loss="mse", optimizer="rmsprop")
    return model
An LSTM cell in Keras gives you three outputs: an output o_t (1st output), a hidden state h_t (2nd output), and a cell state c_t (3rd output).
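A minimal functional-API sketch of those three outputs (the layer size of 32 and the dummy input are assumptions, not part of the question):

import numpy as np
from keras.layers import Input, LSTM
from keras.models import Model

inp = Input(shape=(50, 3))
# With return_state=True the layer returns [output, h_t, c_t]
out, h_t, c_t = LSTM(32, return_state=True)(inp)
model = Model(inp, [out, h_t, c_t])

o, h, c = model.predict(np.zeros((1, 50, 3)))
print(o.shape, h.shape, c.shape)   # (1, 32) (1, 32) (1, 32); out equals h_t here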
Neural network models can be configured for multi-output regression tasks.
LSTM stands for Long short-term memory. LSTM cells are used in recurrent neural networks that learn to predict the future from sequences of variable lengths. Note that recurrent neural networks work with any kind of sequential data and, unlike ARIMA and Prophet, are not restricted to time series.
Some regression machine learning algorithms support multiple outputs directly. This includes most of the popular algorithms implemented in the scikit-learn library, such as LinearRegression (and related models) and KNeighborsRegressor.
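For comparison, a minimal scikit-learn sketch of direct multi-output regression (the random toy data is invented for illustration):

import numpy as np
from sklearn.linear_model import LinearRegression

X = np.random.rand(100, 5)           # 5 input features
Y = np.random.rand(100, 3)           # 3 targets predicted jointly

reg = LinearRegression().fit(X, Y)
print(reg.predict(X[:2]).shape)      # (2, 3) -> one 3-value vector per sample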
The output of every layer is based on how many cells/units/filters it has.
Your output has 1 feature because Dense(1) has only one cell. Simply making it Dense(3) would solve your problem.
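If you only need one [a, b, c] vector per 50-step window (not one per time step), that is the only change. A sketch of the adjusted model, keeping the rest of the question's architecture as an assumption:

from keras.models import Sequential
from keras.layers import LSTM, Dropout, Dense, Activation

model = Sequential()
model.add(LSTM(units=50, input_shape=(50, 3), return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(250, return_sequences=False))   # still one output per window
model.add(Dropout(0.2))
model.add(Dense(3))                            # 3 units -> one [a, b, c] vector
model.add(Activation("linear"))
model.compile(loss="mse", optimizer="rmsprop")
# y_train must then have shape (n_samples, 3), e.g. [[a50, b50, c50], [a51, b51, c51], ...]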
Now, if you want the output to have the same number of time steps as the input, then you need to turn on return_sequences=True in all your LSTM layers.
The output of an LSTM is:
with return_sequences=False: a tensor of shape (batch_size, units), i.e. only the last time step;
with return_sequences=True: a tensor of shape (batch_size, time_steps, units), i.e. one output per time step.
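A quick way to check those shapes (the 8 units here are arbitrary):

from keras.models import Sequential
from keras.layers import LSTM

m1 = Sequential([LSTM(8, input_shape=(50, 3), return_sequences=False)])
m2 = Sequential([LSTM(8, input_shape=(50, 3), return_sequences=True)])
print(m1.output_shape)   # (None, 8)      -> last time step only
print(m2.output_shape)   # (None, 50, 8)  -> one output per time step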
Then you use a TimeDistributed layer wrapper in the following layers so they work as if they also had time steps (it basically preserves the dimension in the middle).
from keras.models import Sequential
from keras.layers import LSTM, Dropout, Dense, Activation, TimeDistributed

def build_model():
    model = Sequential()
    model.add(LSTM(units=50, input_shape=(50, 3), return_sequences=True))
    model.add(Dropout(0.2))
    model.add(LSTM(250, return_sequences=True))
    model.add(Dropout(0.2))
    model.add(TimeDistributed(Dense(3)))   # 3 units -> [a, b, c] at every time step
    model.add(Activation("linear"))
    model.compile(loss="mse", optimizer="rmsprop")
    return model
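With this version every time step emits an [a, b, c] vector, so the targets have to be 3-dimensional as well. A usage sketch (the random data only stands in for the question's real arrays):

import numpy as np

model = build_model()
x_train = np.random.rand(100, 50, 3)
y_train = np.random.rand(100, 50, 3)   # one [a, b, c] target per input time step
model.fit(x_train, y_train, epochs=1, batch_size=32)

pred = model.predict(x_train[:1])
print(pred.shape)   # (1, 50, 3) -> an [a, b, c] vector at every time step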