Implementing the TD-Gammon algorithm

I am attempting to implement the algorithm from the TD-Gammon article by Gerald Tesauro. The core of the learning algorithm is described in the following paragraph:

[Screenshot of the relevant paragraph from the article; the key equation is the TD(λ) weight update w_{t+1} - w_t = α(Y_{t+1} - Y_t) Σ_{k=1}^{t} λ^{t-k} ∇_w Y_k, where Y_k is the network output at time step k, α is the learning rate and λ is the trace-decay parameter.]

I have decided to have a single hidden layer (if that was enough to play world-class backgammon in the early 1990s, then it's enough for me). I am pretty certain that everything except the train() function is correct (those parts are easier to test), but I have no idea whether I have implemented this final algorithm correctly.

import numpy as np

class TD_network:
    """
    Neural network with a single hidden layer and a temporal difference (TD) training algorithm
    taken from G. Tesauro's 1995 TD-Gammon article.
    """
    def __init__(self, num_input, num_hidden, num_output, hnorm, dhnorm, onorm, donorm):
        self.w21 = 2*np.random.rand(num_hidden, num_input) - 1
        self.w32 = 2*np.random.rand(num_output, num_hidden) - 1
        self.b2 = 2*np.random.rand(num_hidden) - 1
        self.b3 = 2*np.random.rand(num_output) - 1
        self.hnorm = hnorm
        self.dhnorm = dhnorm
        self.onorm = onorm
        self.donorm = donorm

    def value(self, input):
        """Evaluates the NN output"""
        assert(input.shape == self.w21[1,:].shape)
        h = self.w21.dot(input) + self.b2
        hn = self.hnorm(h)
        o = self.w32.dot(hn) + self.b3
        return(self.onorm(o))

    def gradient(self, input):
        """
        Calculates the gradient of the NN at the given input. Outputs a list of dictionaries
        where each dict corresponds to the gradient of an output node, and each element in
        a given dict gives the gradient for a subset of the weights. 
        """ 
        assert(input.shape == self.w21[1,:].shape)
        J = []
        h = self.w21.dot(input) + self.b2
        hn = self.hnorm(h)
        o = self.w32.dot(hn) + self.b3

        for i in range(len(self.b3)):
            db3 = np.zeros(self.b3.shape)
            db3[i] = self.donorm(o[i])

            dw32 = np.zeros(self.w32.shape)
            dw32[i, :] = self.donorm(o[i])*hn

            db2 = np.multiply(self.dhnorm(h), self.w32[i,:])*self.donorm(o[i])
            dw21 = np.transpose(np.outer(input, db2))

            J.append(dict(db3 = db3, dw32 = dw32, db2 = db2, dw21 = dw21))
        return(J)

    def train(self, input_states, end_result, a = 0.1, l = 0.7):
        """
        Trains the network using a single series of input states representing a game from beginning
        to end, and a final (supervised / desired) output for the end state.
        a is the learning rate (alpha) and l is the trace-decay parameter (lambda).
        """
        outputs = [self.value(input_state) for input_state in input_states]
        outputs.append(end_result)
        for t in range(len(input_states)):
            delta = dict(
                db3 = np.zeros(self.b3.shape),
                dw32 = np.zeros(self.w32.shape),
                db2 = np.zeros(self.b2.shape),
                dw21 = np.zeros(self.w21.shape))
            grad = self.gradient(input_states[t])
            for i in range(len(self.b3)):
                for key in delta.keys():
                    td_sum = sum([l**(t-k)*grad[i][key] for k in range(t + 1)])
                    delta[key] += a*(outputs[t + 1][i] - outputs[t][i])*td_sum
            self.w21 += delta["dw21"]
            self.w32 += delta["dw32"]
            self.b2 += delta["db2"]
            self.b3 += delta["db3"]

The way I use this is that the neural net plays a whole game against itself, and then I send the states of that game, from start to finish, into train() along with the final result. train() then takes this game log and applies the above formula to alter the weights, first using only the first game state, then the first and second game states, and so on, until the final step, where it uses the entire list of game states. Then I repeat this many times and hope that the network learns.
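
For reference, the driver loop looks roughly like this (sketched here with a placeholder play_game() standing in for the actual tic-tac-toe self-play code, and sigmoid helpers for the normalisation functions):

import numpy as np

def sigmoid(x):
    return 1.0/(1.0 + np.exp(-x))

def dsigmoid(x):
    s = sigmoid(x)
    return s*(1.0 - s)

# placeholder dimensions; the real code uses the actual board encoding
net = TD_network(num_input = 9, num_hidden = 12, num_output = 1,
                 hnorm = sigmoid, dhnorm = dsigmoid,
                 onorm = sigmoid, donorm = dsigmoid)

for game in range(200000):
    # play_game() is a stand-in for the self-play logic; it returns the list of
    # board encodings visited during one game plus the final outcome
    states, result = play_game(net)
    net.train(states, np.array([result]))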

To be clear, I am not after feedback on my code writing. This was never meant to be more than a quick and dirty implementation to see that I have all the nuts and bolts in the right spots.

However, I have no idea whether it is correct, as I have thus far been unable to make it play tic-tac-toe at any reasonable level. There could be many reasons for that. Maybe I'm not giving it enough hidden nodes (I have used 10 to 12). Maybe it needs more games to train on (I have used 200 000). Maybe it would do better with different normalisation functions (I've tried sigmoid and ReLU, leaky and non-leaky, in different variations). Maybe the learning parameters are not tuned right. Maybe tic-tac-toe's deterministic gameplay means the network "locks in" on certain paths in the game tree. Or maybe the training implementation is just wrong. Which is why I'm here.

Have I misunderstood Tesauro's algorithm?

Asked by Arthur on Aug 08 '19


1 Answer

I can't say that I entirely understand your implementation, but this line jumps out at me:

                    td_sum = sum([l**(t-k)*grad[i][key] for k in range(t + 1)])

Comparing with the formula you reference,

    w_{t+1} - w_t = α(Y_{t+1} - Y_t) Σ_{k=1}^{t} λ^{t-k} ∇_w Y_k

I see at least two differences:

  • Your implementation sums over t+1 elements compared to t elements in the formula
  • The gradient should be indexed with the same k as used in l**(t-k), but in your implementation it is indexed with i and key, without any reference to k

Perhaps if you fix these discrepancies your solution will behave more as expected.
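
For illustration, the accumulation could look something like this (an untested sketch reusing your names, assuming you cache one gradient per visited state so it can be indexed by k):

    def train(self, input_states, end_result, a = 0.1, l = 0.7):
        outputs = [self.value(s) for s in input_states]
        outputs.append(end_result)
        # one gradient per visited state, so grads[k] can be used below
        grads = [self.gradient(s) for s in input_states]
        for t in range(len(input_states)):
            delta = dict(
                db3 = np.zeros(self.b3.shape),
                dw32 = np.zeros(self.w32.shape),
                db2 = np.zeros(self.b2.shape),
                dw21 = np.zeros(self.w21.shape))
            for i in range(len(self.b3)):
                for key in delta.keys():
                    # gradient taken at step k, decayed by l**(t - k); whether the
                    # sum has t or t + 1 terms depends on whether the first state
                    # is counted as step 0 or step 1
                    td_sum = sum(l**(t - k)*grads[k][i][key] for k in range(t + 1))
                    delta[key] += a*(outputs[t + 1][i] - outputs[t][i])*td_sum
            self.w21 += delta["dw21"]
            self.w32 += delta["dw32"]
            self.b2 += delta["db2"]
            self.b3 += delta["db3"]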

Answered by Seb on Oct 22 '22