I'm learning neural networks through Keras and would like to explore my sequential dataset on a recurrent neural network. I was reading the docs and trying to make sense of the LSTM example.
My questions are:
- What are the `timesteps` that are required for both layers?
- Why is `Dense` used as an input for those recurrent layers?
- What does the `Embedding` layer do?

It is used when we first want to connect it to a Flatten and then to Dense layers upstream, as it helps to compute the output shape of the dense layer. If the recurrent layer is not the initial layer in the model, then you will have to specify the length of the input at the level of the first layer via input_shape.
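To make the shape bookkeeping above concrete, here is a minimal NumPy sketch (all sizes are hypothetical, chosen only for illustration) of what an Embedding lookup followed by a Flatten does, and why the input length must be known before the downstream Dense layer's weight shape can be computed:

```python
import numpy as np

# Hypothetical sizes, chosen only for illustration.
vocab_size, input_length, output_dim = 50, 10, 8
batch = 4

# An Embedding layer is essentially a lookup table of shape
# (vocab_size, output_dim).
embedding_weights = np.random.rand(vocab_size, output_dim)

# Input: integer token indices, shape (batch, input_length).
tokens = np.random.randint(0, vocab_size, size=(batch, input_length))

# Embedding lookup -> (batch, input_length, output_dim).
embedded = embedding_weights[tokens]

# Flatten -> (batch, input_length * output_dim). This is why the input
# length has to be known in advance: the following Dense layer's input
# size depends on it.
flattened = embedded.reshape(batch, input_length * output_dim)

print(embedded.shape)   # (4, 10, 8)
print(flattened.shape)  # (4, 80)
```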
Generally, two layers have been shown to be enough to detect more complex features. More layers can be better, but they are also harder to train. As a general rule of thumb, one hidden layer works for simple problems like this, and two are enough to find reasonably complex features.
Multi-layer RNNs are a generalization of the one-hidden-layer RNN discussed above. Figure 9 shows a vanilla RNN with two hidden layers.
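A two-hidden-layer RNN can be sketched directly in NumPy: the second layer simply consumes the first layer's sequence of hidden states. All sizes and weights below are made-up placeholders for illustration:

```python
import numpy as np

def rnn_layer(x, W_in, W_rec, b):
    """Vanilla RNN layer: maps an input sequence to a sequence of
    hidden states. x: (timesteps, input_dim) -> (timesteps, hidden_dim)."""
    h = np.zeros(W_rec.shape[0])
    outputs = []
    for x_t in x:
        h = np.tanh(x_t @ W_in + h @ W_rec + b)
        outputs.append(h)
    return np.stack(outputs)

rng = np.random.default_rng(0)
timesteps, input_dim, h1, h2 = 6, 3, 5, 4

x = rng.normal(size=(timesteps, input_dim))
# The first hidden layer consumes the input sequence...
out1 = rnn_layer(x, rng.normal(size=(input_dim, h1)),
                 rng.normal(size=(h1, h1)), np.zeros(h1))
# ...and the second hidden layer consumes the first layer's states.
out2 = rnn_layer(out1, rng.normal(size=(h1, h2)),
                 rng.normal(size=(h2, h2)), np.zeros(h2))

print(out1.shape, out2.shape)  # (6, 5) (6, 4)
```

In Keras this stacking corresponds to passing one recurrent layer's full output sequence (`return_sequences=True`) as the input of the next.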
Timesteps are a rather bothersome aspect of Keras. Because the data you provide as input to your LSTM must be a NumPy array, it needs (at least for Keras version <= 0.3.3) a fully specified shape, including the "time" dimension. You can only feed sequences of a specified length as input, so if your inputs vary in length, you should either pad your sequences with artificial data or use "stateful" mode (please read the Keras documentation carefully to understand what this approach means). Both solutions can be unpleasant, but that's the cost of Keras being so simple :) I hope that in version 1.0.0 they will do something about that.
There are two ways to apply non-recurrent layers after LSTM ones:
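The list itself appears to have been lost from this answer; in Keras the two options are usually taken to be (a) returning only the last hidden state (`return_sequences=False`) and feeding it to a Dense layer, or (b) keeping the whole state sequence (`return_sequences=True`) and applying the same Dense weights at every timestep (`TimeDistributed(Dense(...))`). A NumPy sketch of the shape arithmetic, with all sizes invented for illustration:

```python
import numpy as np

batch, timesteps, units, dense_units = 2, 5, 8, 3
rng = np.random.default_rng(1)

# Pretend this is the full sequence of LSTM hidden states,
# i.e. what return_sequences=True would produce.
lstm_seq = rng.normal(size=(batch, timesteps, units))
W = rng.normal(size=(units, dense_units))
b = np.zeros(dense_units)

# Option (a): last hidden state only -> Dense.
last_state = lstm_seq[:, -1, :]          # (batch, units)
dense_once = last_state @ W + b          # (batch, dense_units)

# Option (b): apply the same Dense weights at every timestep,
# which is what TimeDistributed(Dense(...)) does.
dense_per_step = lstm_seq @ W + b        # (batch, timesteps, dense_units)

print(dense_once.shape)      # (2, 3)
print(dense_per_step.shape)  # (2, 5, 3)
```

Note that the last timestep of option (b) coincides with option (a), since both apply the same weights to the final hidden state.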
https://stats.stackexchange.com/questions/182775/what-is-an-embedding-layer-in-a-neural-network :)