
Policy Gradients in Keras

I've been trying to build a model using 'Deep Q-Learning' where I have a large number of actions (2908). After some limited success using standard DQN (https://www.cs.toronto.edu/~vmnih/docs/dqn.pdf), I decided to do some more research because I figured the action space was too large for effective exploration.

I then discovered this paper: https://arxiv.org/pdf/1512.07679.pdf, where they use an actor-critic model and policy gradients, which then led me to: https://arxiv.org/pdf/1602.01783.pdf, where they use policy gradients to get much better results than DQN overall.

I've found a few sites that implement policy gradients in Keras (https://yanpanlau.github.io/2016/10/11/Torcs-Keras.html and https://oshearesearch.com/index.php/2016/06/14/kerlym-a-deep-reinforcement-learning-toolbox-in-keras/), however I'm confused about how they are implemented. In the former (and in the papers), it seems that instead of providing an input-output pair for the actor network, you provide the gradients for all the weights and use those to update the network, whereas in the latter they just compute an input-output pair.

Have I just confused myself? Am I supposed to train the network by providing an input-output pair and using the standard 'fit', or do I have to do something special? If it's the latter, how do I do it with the Theano backend? (The examples above use TensorFlow.)

asked Nov 05 '16 by simeon


People also ask

What is policy gradient?

Policy gradient methods are a type of reinforcement learning techniques that rely upon optimizing parametrized policies with respect to the expected return (long-term cumulative reward) by gradient descent.

Is DDPG a policy gradient?

Deep Deterministic Policy Gradient (DDPG) is a reinforcement learning technique that combines both Q-learning and Policy gradients. DDPG being an actor-critic technique consists of two models: Actor and Critic.

Is DQN a policy gradient method?

Deep Q-learning is a value-based method, while Policy Gradient is a policy-based method. It can learn a stochastic policy (it outputs the probabilities for every action), which is useful for handling the exploration/exploitation trade-off. Often π is simpler than V or Q.

What is deterministic policy gradient?

Deep Deterministic Policy Gradient (DDPG) is an algorithm which concurrently learns a Q-function and a policy. It uses off-policy data and the Bellman equation to learn the Q-function, and uses the Q-function to learn the policy.


2 Answers

TL;DR

  1. Learn how to implement custom loss functions and gradients using keras.backend. You will need it for more advanced algorithms, and it's actually much easier once you get the hang of it.
  2. One CartPole example using keras.backend is https://gist.github.com/kkweon/c8d1caabaf7b43317bc8825c226045d2 (it uses the TensorFlow backend, but the Theano backend should be very similar, if not identical)

Problem

When playing,

the agent needs a policy, which is basically a function that maps a state to a probability distribution over actions. The agent then chooses an action according to its policy.

i.e., policy = f(state)
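
For instance, choosing an action could look like the following minimal sketch (an assumption for illustration: `model` is a Keras network whose final layer is a softmax over the actions):

import numpy as np

def choose_action(model, state):
    # policy = f(state): predict a probability for each action
    probs = model.predict(state[np.newaxis, :])[0]
    # sample an action according to the policy
    return np.random.choice(len(probs), p=probs)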

When training,

Policy Gradient does not have a conventional loss function. Instead, it tries to maximize the expected return. To do that, we need to compute the gradients of log(action_prob) * advantage, where:

  1. advantage is a function of rewards (one common choice, the discounted return, is sketched after this list).
    • advantage = f(rewards)
  2. action_prob is a function of states and action_taken. That is, we need to know which action we took so that we can update the parameters to increase or decrease the probability of that action.
    • action_prob = sum(policy * action_onehot) = f(states, action_taken)

I'm assuming something like this

  • policy = [0.1, 0.9]
  • action_onehot = action_taken = [0, 1]
  • then action_prob = sum(policy * action_onehot) = 0.9
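
As noted above, the advantage is derived from the rewards. A minimal, baseline-free sketch using normalized discounted returns, which is one common choice but not the only one:

import numpy as np

def discounted_returns(rewards, gamma=0.99):
    # advantage = f(rewards): discount future rewards back to each time step
    returns = np.zeros(len(rewards), dtype=np.float32)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    # normalization is optional but commonly used to reduce variance
    returns -= returns.mean()
    returns /= returns.std() + 1e-8
    return returns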

Summary

We need two functions

  • update function: f(state, action_taken, reward)
  • choose action function: f(state)

You already know this is not as easy to implement as a typical classification problem, where you can just model.compile(...) -> model.fit(X, y)

However,

  • In order to fully utilize Keras, you should be comfortable with defining custom loss functions and gradients. This is basically the same approach the author of the former one took.

  • You should read more of the documentation for the Keras functional API and keras.backend; a minimal sketch of the update function built this way follows below.
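
To make this concrete, here is a minimal, hedged sketch of the update function built with keras.backend. The network, sizes, and hyperparameters are illustrative assumptions, and because it only uses keras.backend it works with either the Theano or TensorFlow backend:

import keras.backend as K
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam

STATE_DIM, NUM_ACTIONS = 4, 2  # illustrative sizes

# policy network: state -> softmax probabilities over actions
model = Sequential([
    Dense(32, activation='relu', input_shape=(STATE_DIM,)),
    Dense(NUM_ACTIONS, activation='softmax'),
])

def build_update_fn(model, lr=1e-3):
    # update function: f(states, action_onehots, advantages)
    action_onehot = K.placeholder(shape=(None, NUM_ACTIONS), name='action_onehot')
    advantage = K.placeholder(shape=(None,), name='advantage')

    # probability of the action that was actually taken
    action_prob = K.sum(model.output * action_onehot, axis=1)
    # minimize the negative, i.e. maximize log(action_prob) * advantage
    loss = -K.mean(K.log(action_prob + 1e-10) * advantage)

    # note: get_updates' signature differs slightly between Keras versions
    updates = Adam(lr=lr).get_updates(loss=loss, params=model.trainable_weights)
    return K.function([model.input, action_onehot, advantage], [loss], updates=updates)

update_fn = build_update_fn(model)
# after collecting a batch of (states, action_onehots, advantages) from rollouts:
# update_fn([states, action_onehots, advantages])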

Plus, there are many, many kinds of policy gradients.

  • The former is DDPG, which is actually quite different from regular policy gradients
  • The latter is a traditional REINFORCE policy gradient (pg.py), based on Karpathy's policy gradient example. It is very simple, though; for example, it assumes only one action. That's why it could be implemented using model.fit(...) instead (see the sketch below).
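
For a multi-action softmax policy, one way that model.fit(...) shortcut can look is the following hedged sketch, assuming `model` is a softmax policy network and `states`, `action_onehots` (the actions actually taken, one-hot encoded), and `returns` (shape (N,)) come from your own rollout code:

# categorical cross-entropy against the taken action gives -log(action_prob);
# weighting each sample by its return recovers the REINFORCE gradient
model.compile(optimizer='adam', loss='categorical_crossentropy')
model.fit(states, action_onehots, sample_weight=returns, epochs=1, verbose=0)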

References

  • Schulman, "Policy Gradient Methods", http://rll.berkeley.edu/deeprlcourse/docs/lec2.pdf
answered Sep 19 '22 by Mo...


The seemingly conflicting implementations you encountered are both valid. They are two equivalent ways to implement policy gradients.

  1. In the vanilla implementation, you compute the policy gradient (the gradient of the expected return with respect to the network weights) yourself and update the weights directly in that direction. This requires the steps described by Mo K.

  2. The second option is arguably a more convenient implementation for autodiff frameworks like Keras/TensorFlow. The idea is to implement an input-output (state-action) function like supervised learning, but with a loss function whose gradient is identical to the policy gradient. For a softmax policy, this simply means predicting the 'true action' and multiplying the (cross-entropy) loss with the observed returns/advantage. Aleksis Pirinen has some useful notes about this [1].

The modified loss function for option 2 in Keras looks like this:

import keras.backend as K

def policy_gradient_loss(Returns):
    def modified_crossentropy(action, action_probs):
        cost = K.categorical_crossentropy(action, action_probs, from_logits=False, axis=1) * Returns
        return K.mean(cost)
    return modified_crossentropy

where 'action' is the one-hot encoded action actually taken in the episode (y), and action_probs is the predicted probability (ŷ). This is based on another Stack Overflow question [2].
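
A hedged usage sketch: Returns is captured by the closure, so the model is (re)compiled with the returns gathered for the current batch before fitting on it. Here `model`, `states`, `actions_onehot`, and `returns` (shape (N,)) are assumed to come from your own policy network and rollout code:

model.compile(optimizer='adam', loss=policy_gradient_loss(returns))
# fit on the whole batch at once so that the shape of returns matches the loss
model.fit(states, actions_onehot, batch_size=len(states), epochs=1, verbose=0)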

References

  1. https://aleksispi.github.io/assets/pg_autodiff.pdf
  2. Make a custom loss function in keras
answered Sep 22 '22 by Arjan Groen