
How can I train neural network to play the 2048 game?

I want to train a neural network to play the 2048 game. I know that NNs aren't a good choice for state games like 2048, but I want the NN to play the game like an experienced human, i.e. moving tiles only in three directions.

But I can't figure out how to self-train the NN, since we don't know the valid output. Normally, e.g. in regression, you know the correct output, so you can compute the loss (e.g. mean squared error) and update the weights. But in 2048 the valid output is basically unknown (of course you can compute the score for each direction you can move, e.g. the direction with the highest difference score_after_move - previous_score would be our valid output, but I don't think that's the way for the NN to learn on its own). So is it possible to define a loss function for the 2048 game? Ideally a differentiable one.
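For illustration, here is a minimal sketch of that greedy score-difference baseline (this is the non-learning strategy described above, not a training procedure; apply_move is a hypothetical helper of the headless game class mentioned below):

    # Greedy baseline: evaluate each legal move by its immediate score gain
    # and pick the best one. `apply_move(board, d)` is a hypothetical helper
    # returning (new_board, score_gained, board_changed).
    def greedy_move(board, apply_move):
        best_dir, best_gain = None, -1
        for d in range(4):                      # 0=up, 1=right, 2=down, 3=left
            _, gain, changed = apply_move(board, d)
            if changed and gain > best_gain:    # skip moves that do nothing
                best_dir, best_gain = d, gain
        return best_dir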

The next question is when to update the weights: after each move, or rather after a complete game (game over)?

If it's important: my NN topology will be simple for now:

2D matrix of gaming board -> 2D matrix of input neurons -> 2D fully-connected hidden layer -> 1D 4-neuron layer

So each tile will be the input to the corresponding neuron in the first layer (is there any special name for a 2D fully-connected layer?). The expected output from the last layer is a vector of length 4, e.g. [1, 0, 0, 0] would be the "up" movement direction.
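For illustration, a minimal Keras sketch of that topology, assuming the board is passed in as a 4x4 array (the hidden-layer size is just a placeholder):

    from keras.models import Sequential
    from keras.layers import Flatten, Dense

    # 4x4 board in, one output neuron per move direction.
    model = Sequential([
        Flatten(input_shape=(4, 4)),      # 2D board -> 16 input values
        Dense(64, activation='relu'),     # fully-connected hidden layer
        Dense(4, activation='softmax'),   # [up, right, down, left] probabilities
    ])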

For now I have implemented a headless class (in Python/NumPy) for the 2048 game, because using visual input is slow and also more work.
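For reference, the core move logic of such a headless board might look like this (a sketch with made-up function names, not the actual class):

    import numpy as np

    def merge_row_left(row):
        """Slide and merge one row to the left, 2048-style; returns (new_row, score_gained)."""
        tiles = row[row != 0]                # drop empty cells
        merged, score, i = [], 0, 0
        while i < len(tiles):
            if i + 1 < len(tiles) and tiles[i] == tiles[i + 1]:
                merged.append(tiles[i] * 2)  # merge equal neighbours
                score += tiles[i] * 2
                i += 2                       # both tiles consumed by the merge
            else:
                merged.append(tiles[i])
                i += 1
        merged += [0] * (len(row) - len(merged))
        return np.array(merged), score

    def move_left(board):
        """Apply a left move to the whole 4x4 board; returns (new_board, score_gained)."""
        rows, total = [], 0
        for r in board:
            new_r, s = merge_row_left(r)
            rows.append(new_r)
            total += s
        return np.array(rows), total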

P.S. Maybe I am thinking incorrectly about NN learning for this game (or games in general). Feel free to show me a better way, I would appreciate it. Thanks :)

EDIT: Reinforcement learning seems to be the way. Here are some useful links:

Demystifying Deep Reinforcement Learning

Action-Value Methods and n-armed bandit problems

Q-learning for Keras

Deep Reinforcement Learning for Keras

asked Nov 04 '16 by jirinovo

1 Answer

So https://github.com/matthiasplappert/keras-rl seems to be the best way. You only need to implement the few methods defined by the OpenAI Gym environment API, namely the step() and reset() methods: https://github.com/matthiasplappert/keras-rl/blob/master/rl/core.py#L330
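For illustration, a minimal sketch of such an environment wrapper, assuming a headless Game2048 class with new_game(), do_move(direction) -> (score_gained, moved), is_game_over() and a board attribute (these names are placeholders, not a real API):

    import numpy as np

    class Game2048Env:
        """Gym-style wrapper around a hypothetical headless Game2048 class."""

        def reset(self):
            self.game = Game2048()
            self.game.new_game()
            # log2 of tile values keeps the observation in a small range
            return np.log2(np.maximum(self.game.board, 1))

        def step(self, action):
            score_gained, moved = self.game.do_move(action)
            reward = score_gained if moved else -1   # penalize moves that do nothing
            done = self.game.is_game_over()
            observation = np.log2(np.maximum(self.game.board, 1))
            return observation, reward, done, {}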

For more info, there is an answer from a keras-rl developer: https://github.com/matthiasplappert/keras-rl/issues/38

When my 2048 game AI project is finished, I will post the link to the source here (if I don't forget to do so :) )

EDIT: here is the promised link to the source, I totally forgot about it :/ https://github.com/gorgitko/MI-MVI_2016

answered Sep 21 '22 by jirinovo