I am using an Echo State Network (ESN) as a Q-function approximator in a reinforcement learning task. I have managed to achieve high accuracy, 90% on average, in the test phase with a particular reservoir topology (spectral radius = 0.9, regularization coefficient = 10, 2 input units, 1 output unit, 8 reservoir units, and no leaking rate).
The system reached this accuracy after training for 100 episodes. But when I initialized the network weights with different random seeds, its behavior became very unstable and it failed to reach the same performance as before. How can I overcome this randomness and obtain an ESN that is robust to different random initializations of its input and reservoir weights and generalizes well?
Here is how I initialize my network: the input and reservoir weights are sampled from a normal distribution (mean = 0, std = 1). The input weight matrix is then normalized to unit variance, and the reservoir weight matrix is divided by its largest absolute eigenvalue and multiplied by the target spectral radius.
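For reference, here is a minimal NumPy sketch of the initialization you describe; the function name and default arguments are mine, with the parameter values taken from your post:

```python
import numpy as np

def init_esn_weights(n_in=2, n_res=8, spectral_radius=0.9, seed=0):
    """Sample ESN weights as described: N(0, 1) draws, then rescaling."""
    rng = np.random.default_rng(seed)

    # Input weights: N(0, 1), then normalized to unit variance.
    W_in = rng.normal(0.0, 1.0, size=(n_res, n_in))
    W_in /= W_in.std()

    # Reservoir weights: N(0, 1), divided by the largest absolute
    # eigenvalue and multiplied by the target spectral radius.
    W_res = rng.normal(0.0, 1.0, size=(n_res, n_res))
    rho = np.max(np.abs(np.linalg.eigvals(W_res)))
    W_res *= spectral_radius / rho

    return W_in, W_res
```

After this rescaling the reservoir matrix has spectral radius exactly 0.9 regardless of the seed, so the instability you see must come from the *direction* of the random weights, not their overall scale.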
Thanks in advance,
Ramin
I tend to agree with your comment: your reservoir needs more neurons to increase the probability of capturing the right dynamics. Regarding your second question, the principle is not very different from conventional feedforward NNs: you will need an empirical parameter search. More specifically, for ESNs, I do the following:
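One way to make that empirical search robust to initialization is to score each configuration by its performance averaged over several seeds, penalizing configurations with high variance. A hedged sketch (the grid values, the variance penalty, and the `evaluate` callback, which stands in for your own train-and-test routine, are all my assumptions):

```python
import itertools
import numpy as np

def robust_grid_search(evaluate, seeds=(0, 1, 2, 3, 4)):
    """Pick the hyperparameters with the best mean-minus-std score
    across several random seeds, so the winner is seed-robust."""
    grid = {
        "n_res": [8, 50, 200],               # larger reservoirs, as suggested
        "spectral_radius": [0.7, 0.9, 0.99],
        "reg": [1e-2, 1.0, 10.0],
    }
    best_score, best_params = None, None
    for values in itertools.product(*grid.values()):
        params = dict(zip(grid.keys(), values))
        # evaluate(seed=..., **params) should return test accuracy.
        scores = [evaluate(seed=s, **params) for s in seeds]
        score = float(np.mean(scores) - np.std(scores))  # penalize instability
        if best_score is None or score > best_score:
            best_score, best_params = score, params
    return best_params
```

Selecting on mean-minus-std rather than the single best run avoids exactly the failure mode in the question: a topology that looks excellent under one lucky seed but collapses under others.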