I'm learning to build a customized sequence-to-sequence model with Keras, and have been reading code that other people have written, for example here. I got confused about the constants in the call method. There is the Keras "Note on passing external constants to RNNs", but I'm having trouble understanding what the constants do to the model.
I did go through the attention-model and pointer-network papers, but maybe I missed something.
Any reference to the modeling details would be appreciated! Thanks in advance.
Okay, just as a reference in case someone else stumbles across this question: I went through the code in the recurrent.py file. I think get_constants builds the dropout mask and the recurrent dropout mask, which are then appended to the [h, c] states (the order of these four elements is required by the LSTM step method). Beyond those four, the original LSTM cell doesn't care what else is in the list, so you can append your own 'constants' (constant in the sense that they won't be learned) and they will be available at every timestep. All constants are implicitly appended to the [h, c] states that step receives. In Keon's example the fifth position of that states list is the input sequence, and it can be referenced at every timestep via states[-1].
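To make the mechanics concrete, here is a minimal sketch of that pattern against the old Keras 1.x Recurrent API (get_constants, step, and K.rnn as described above). The class name SequenceConstantLSTM is my own illustrative invention, not Keon's actual code, and the attention computation is left as a placeholder:

```python
from keras import backend as K
from keras.layers.recurrent import LSTM


class SequenceConstantLSTM(LSTM):
    """Illustrative (hypothetical) LSTM subclass that hands the whole
    input sequence to every timestep as an extra constant."""

    def call(self, x, mask=None):
        input_shape = self.input_spec[0].shape
        initial_states = self.get_initial_states(x)

        # Positions 0 and 1 of this list are the dropout and
        # recurrent-dropout masks built by the parent's get_constants.
        constants = self.get_constants(x)
        # Our own constant: the raw input sequence. It is not learned,
        # and K.rnn hands it to step unchanged at every timestep.
        constants.append(x)

        preprocessed_input = self.preprocess_input(x)
        last_output, outputs, states = K.rnn(
            self.step, preprocessed_input, initial_states,
            go_backwards=self.go_backwards,
            constants=constants,
            input_length=input_shape[1])
        return outputs if self.return_sequences else last_output

    def step(self, x_input, states):
        # K.rnn appends the constants to the state list, so here
        # states == [h, c, dropout_mask, rec_dropout_mask, input_seq].
        en_seq = states[-1]  # the constant we appended: same every step
        # The parent step expects exactly the first four entries,
        # in exactly that order, so strip our extra constant off.
        output, new_states = super(SequenceConstantLSTM, self).step(
            x_input, states[:-1])
        # ... en_seq could now be used here, e.g. to compute attention
        # scores over the encoder sequence, as in the pointer network ...
        return output, new_states
```

The only real contract is that the parent step consumes the first four state entries in a fixed order; anything appended after them is ignored by the original cell and is yours to interpret inside your own step.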