I am looking at panel data, which is structured like this:
D = \{(x^{(k)}_{t}, y^{(k)}_{t}) \mid t = t_0,\dots,t_k \}_{k=1}^{N}
where x^{(k)} denotes the k'th sequence, x^{(k)}_{t} denotes the k'th sequence's value at time t, and x^{(k)}_{i,t} is the i'th entry of the vector x^{(k)}_{t}. That is, x^{(k)}_{t} is the feature vector of the k'th sequence at time t. The sub- and superscripts mean the same for the label data y^{(k)}_{t}, except that here y^{(k)}_{t} \in \{0,1\}.
In plain words: the data set contains individuals observed over time, and for each time point at which an individual is observed, it is recorded whether they bought an item or not ( y \in \{0,1\} ).
I would like to use a recurrent neural network with LSTM units from Keras to predict whether a person will buy an item or not at a given time point. I have only been able to find examples of RNNs where each sequence has a single label (philipperemy link), not an example where each sequence element has a label, as in the problem I described.
My approach so far has been to create a tensor with dimensions (samples, timesteps, features), but I cannot figure out how to format the labels so that Keras can match them with the features. It should be something like (samples, timesteps, 1), where the last dimension holds the single label value of 0 or 1.
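For concreteness, here is a minimal sketch of building such feature and label tensors from variable-length sequences by zero-padding them to a common length. The data here is randomly generated dummy data and all names are illustrative, not from the original question:

```python
import numpy as np

# Hypothetical example: 3 individuals observed for 5, 3 and 7 time points,
# each observation a feature vector of length 4 plus a 0/1 label.
seq_features = [np.random.rand(5, 4), np.random.rand(3, 4), np.random.rand(7, 4)]
seq_labels = [np.random.randint(0, 2, (5,)), np.random.randint(0, 2, (3,)),
              np.random.randint(0, 2, (7,))]

max_t = max(f.shape[0] for f in seq_features)  # length of the longest sequence
n, d = len(seq_features), seq_features[0].shape[1]

X = np.zeros((n, max_t, d))   # (samples, timesteps, features)
y = np.zeros((n, max_t, 1))   # (samples, timesteps, 1)
for k, (f, l) in enumerate(zip(seq_features, seq_labels)):
    X[k, :f.shape[0], :] = f  # shorter sequences stay zero-padded at the end
    y[k, :l.shape[0], 0] = l

print(X.shape, y.shape)  # (3, 7, 4) (3, 7, 1)
```

Because sample k and timestep t index the same positions in X and y, Keras can match each label to its feature vector.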
Furthermore, some of the approaches I have come across split sequences so that the subsequences are added to the training data, which increases the memory requirements tremendously (mlmastery link). This is infeasible in my case: I have multiple GBs of data, and I would not be able to hold it in memory if I added subsequences.
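One way to sidestep the memory problem is to pad and assemble batches on the fly in a Python generator, so the full data set never has to sit in memory at once. This is a sketch under assumed names; for the multi-GB case the slicing of in-memory lists would be replaced by loading each batch's sequences from disk:

```python
import numpy as np

def batch_generator(feature_arrays, label_arrays, batch_size, timesteps, n_features):
    """Yield zero-padded (X, y) batches indefinitely.

    feature_arrays: list of arrays of shape (t_k, n_features)
    label_arrays:   list of arrays of shape (t_k,) with 0/1 labels
    """
    while True:
        for start in range(0, len(feature_arrays), batch_size):
            feats = feature_arrays[start:start + batch_size]
            labs = label_arrays[start:start + batch_size]
            X = np.zeros((len(feats), timesteps, n_features))
            y = np.zeros((len(feats), timesteps, 1))
            for k, (f, l) in enumerate(zip(feats, labs)):
                X[k, :len(f)] = f      # pad each sequence to `timesteps`
                y[k, :len(l), 0] = l
            yield X, y
```

Older Keras versions consume this via `mod.fit_generator(gen, steps_per_epoch=...)`; newer versions accept the generator directly in `fit`.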
The model I would like to use is something like this:
from keras.models import Sequential
from keras.layers import LSTM, Dense

mod = Sequential()
mod.add(LSTM(30, input_dim=116, return_sequences=True))
mod.add(LSTM(10))
mod.add(Dense(2))
Does anyone have experience working with panel data in Keras?
Try:
from keras.models import Sequential
from keras.layers import LSTM, Dense, TimeDistributed

mod = Sequential()
mod.add(LSTM(30, input_shape=(timesteps, features), return_sequences=True))
mod.add(LSTM(10, return_sequences=True))
mod.add(TimeDistributed(Dense(1, activation='sigmoid')))
# In newer Keras versions you can replace the line above with mod.add(Dense(1, activation='sigmoid'))
mod.compile(loss='binary_crossentropy', optimizer='rmsprop')
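Since the sequences have different lengths t_0,…,t_k and must be zero-padded to a common length, a Masking layer can tell the LSTMs to skip the padded steps. Here is a runnable sketch along those lines; `timesteps`, `features` and the training data are assumed dummy values, not from the original question:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Masking, LSTM, Dense, TimeDistributed

timesteps, features = 7, 116  # assumed dimensions

mod = Sequential()
mod.add(Masking(mask_value=0.0, input_shape=(timesteps, features)))  # ignore padded steps
mod.add(LSTM(30, return_sequences=True))
mod.add(LSTM(10, return_sequences=True))
mod.add(TimeDistributed(Dense(1, activation='sigmoid')))
mod.compile(loss='binary_crossentropy', optimizer='rmsprop')

# Dummy panel data: 8 individuals, one 0/1 label per timestep.
X = np.random.rand(8, timesteps, features)
y = np.random.randint(0, 2, (8, timesteps, 1))
mod.fit(X, y, epochs=1, batch_size=4, verbose=0)
print(mod.predict(X).shape)  # one probability per (sample, timestep): (8, 7, 1)
```

`return_sequences=True` on every LSTM is what keeps one output per timestep, so the final sigmoid produces a buy/no-buy probability for each observation rather than one per sequence.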