I am experimenting with the Q-learning algorithm. I have read about it from different sources and understood the algorithm; however, there seems to be no clear, mathematically backed convergence criterion.
Most sources recommend iterating a fixed number of times (for example, N = 1000), while others say convergence is achieved when all state-action pairs (s, a) are visited infinitely often. But how often is "infinitely often" in practice? What is the best criterion for someone who wants to work through the algorithm by hand?
I would be grateful if someone could educate me on this. I would also appreciate any articles to this effect.
Regards.
Q-Learning was a major breakthrough in reinforcement learning precisely because it was the first algorithm with guaranteed convergence to the optimal policy. It was originally proposed in (Watkins, 1989) and its convergence proof was refined in (Watkins & Dayan, 1992).
In short, two conditions must be met to guarantee convergence in the limit, meaning that the policy will become arbitrarily close to the optimal policy after an arbitrarily long period of time. Note that these conditions say nothing about how fast the policy will approach the optimal policy.
1. The learning rate must approach zero, but not too quickly. Formally, the sum of the learning rates must diverge, while the sum of their squares must converge. An example sequence with these properties is 1/1, 1/2, 1/3, 1/4, ...
2. Each state-action pair must be visited infinitely often. This has a precise mathematical meaning: every action must have a non-zero probability of being selected by the policy in every state, i.e. π(s, a) > 0 for all (s, a). In practice, using an ε-greedy policy (where ε > 0) ensures that this condition is satisfied.
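To make the two conditions concrete, here is a minimal sketch of tabular Q-learning on a toy chain MDP. The environment, the per-(s, a) learning-rate schedule α = 1/n, and the value ε = 0.1 are illustrative assumptions, not something from the answer above.

```python
import random
from collections import defaultdict

N_STATES, N_ACTIONS, GAMMA, EPSILON = 5, 2, 0.9, 0.1

def step(s, a):
    """Toy chain MDP (hypothetical): action 1 moves right, action 0 moves left; reward 1 at the right end."""
    s_next = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
    reward = 1.0 if s_next == N_STATES - 1 else 0.0
    done = s_next == N_STATES - 1
    return s_next, reward, done

Q = defaultdict(float)      # Q[(s, a)], initialised to 0
visits = defaultdict(int)   # visit counts n(s, a), used for the 1/n learning rate

def epsilon_greedy(s):
    # Condition 2: with probability EPSILON > 0 pick a uniformly random action,
    # so every (s, a) pair keeps a non-zero probability of being visited.
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    return max(range(N_ACTIONS), key=lambda a: Q[(s, a)])

for episode in range(5000):
    s, done = 0, False
    while not done:
        a = epsilon_greedy(s)
        s_next, r, done = step(s, a)
        visits[(s, a)] += 1
        # Condition 1: alpha = 1/n per (s, a) -- the sum of the alphas diverges,
        # the sum of their squares converges (the 1/1, 1/2, 1/3, ... sequence above).
        alpha = 1.0 / visits[(s, a)]
        target = r + (0.0 if done else GAMMA * max(Q[(s_next, b)] for b in range(N_ACTIONS)))
        Q[(s, a)] += alpha * (target - Q[(s, a)])   # standard Q-learning update
        s = s_next
```

With the 1/n schedule learning is slow, but every (s, a) pair keeps being updated, which is exactly what the two conditions require; a fixed small α (e.g. 0.1) is more common in practice but does not satisfy the learning-rate condition, so the asymptotic guarantee no longer formally applies.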
In practice, an RL algorithm is considered to have converged when its learning curve flattens and no longer improves. However, what exactly you should monitor depends on the specifics of your algorithm and your problem.
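One common hand-checkable proxy for "the learning curve is flat" (a heuristic assumption, not a mathematically backed criterion) is to snapshot the Q-table periodically and stop once no entry changes by more than a small tolerance. A minimal sketch:

```python
def max_q_change(q_old, q_new):
    """Largest absolute change of any Q-value between two snapshots of the table."""
    keys = set(q_old) | set(q_new)
    return max((abs(q_new.get(k, 0.0) - q_old.get(k, 0.0)) for k in keys), default=0.0)

def has_converged(q_old, q_new, tol=1e-4):
    """Heuristic stopping test: no entry moved by more than `tol` since the last snapshot."""
    return max_q_change(q_old, q_new) < tol
```

For example, snapshot `dict(Q)` every few hundred episodes in a loop like the one above and stop after `has_converged` holds for several consecutive checks. Note that this only detects that the updates have stalled, not that the learned policy is optimal; the tolerance and the number of consecutive checks are judgment calls.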
In theory, Q-learning has been proven to converge to the optimal solution, but in practice it is usually not obvious how to tune hyperparameters such as ε and α in a way that ensures convergence.
Keep in mind that Q-learning is an old and somewhat outdated algorithm; it is a good way to learn about RL, but there are better options for solving real-life problems.