I'm trying to implement an AI for a game of 'continuous snake'. It's very different from a normal snake game, at least as far as the AI is concerned. Basically, the snake drives a bit like a car, and the first of the two players to crash into their own trail or the other's trail loses the game. The screen also wraps around at its borders.
You can understand it better if you look at a video of my current progress: https://www.youtube.com/watch?v=i9qU-r4COQ8
It's not too bad, but it still can't beat me (I'm yellow). A winning AI would ideally need to exhibit these behaviors:
My current approach uses the NEAT algorithm (http://www.cs.ucf.edu/~kstanley/neat.html). It's a genetic algorithm that evolves neural networks over generations. It has learned behaviors 1, 2 and 3 to some extent (though not well), but has no idea about 4.
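To make the setup concrete, here's a stripped-down sketch of the kind of generational loop involved. This is not my actual code: real NEAT also evolves the network topology and uses speciation, and the fitness function here is just a dummy placeholder for "play a match and score it".

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Simplified generational loop in the spirit of NEAT. Real NEAT also evolves
// network topology and uses speciation; here only fixed-length weight vectors
// are mutated, and Fitness() is a placeholder for "play a match and score it".
class EvolutionSketch
{
    const int PopulationSize = 50;
    const int GenomeLength = 20;                 // e.g. number of connection weights
    static readonly Random Rng = new Random(0);

    // Dummy objective (prefers weights near zero); in the real game this would
    // run a simulated match and return e.g. frames survived.
    static double Fitness(double[] genome) => -genome.Sum(w => w * w);

    static double[] Mutate(double[] genome) =>
        genome.Select(w => w + (Rng.NextDouble() - 0.5) * 0.1).ToArray();

    static void Main()
    {
        // Random initial population of weight vectors.
        List<double[]> population = Enumerable.Range(0, PopulationSize)
            .Select(i => Enumerable.Range(0, GenomeLength)
                                   .Select(j => Rng.NextDouble() * 2 - 1)
                                   .ToArray())
            .ToList();

        for (int gen = 0; gen < 100; gen++)
        {
            // Rank by fitness, keep the best half, refill with mutated copies.
            var ranked = population.OrderByDescending(Fitness).ToList();
            var survivors = ranked.Take(PopulationSize / 2).ToList();
            population = survivors.Concat(survivors.Select(Mutate)).ToList();

            Console.WriteLine($"Gen {gen}: best fitness = {Fitness(ranked[0]):F3}");
        }
    }
}
```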
For the inputs, I'm using:
I'm a bit stuck now though and would like to know:
I'm happy to make my code available if someone wants to see it (C#).
Thanks!
The main problem here is that your learning algorithm doesn't have enough information (unless you are using NEAT's recurrent connections). Basically, every frame you're asking it to navigate a maze using only a few distance sensors, which is effectively impossible.
What singhV said before is true: for good results, the input to the learning algorithm has to be the image itself (along with your own head position and angle). You can lower the resolution a bit and convert to monochrome for efficiency.
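As a rough sketch (names, types and grid size are my assumptions, not your actual data structures), the encoding could downsample a boolean trail map into a coarse monochrome grid and append the head state:

```csharp
using System;

// Rough sketch of encoding the whole play field as network inputs:
// downsample a boolean "trail map" to a coarse monochrome grid and append
// the snake's own head position and heading. All names/sizes are assumptions.
static class InputEncoder
{
    // trail[x, y] == true means that pixel is covered by a trail.
    public static double[] Encode(bool[,] trail, double headX, double headY,
                                  double headAngle, int gridW = 16, int gridH = 16)
    {
        int screenW = trail.GetLength(0);
        int screenH = trail.GetLength(1);
        var inputs = new double[gridW * gridH + 3];

        // Each coarse cell becomes 1.0 if any pixel inside it is occupied.
        for (int gx = 0; gx < gridW; gx++)
        for (int gy = 0; gy < gridH; gy++)
        {
            bool occupied = false;
            for (int x = gx * screenW / gridW; x < (gx + 1) * screenW / gridW && !occupied; x++)
            for (int y = gy * screenH / gridH; y < (gy + 1) * screenH / gridH && !occupied; y++)
                if (trail[x, y]) occupied = true;
            inputs[gy * gridW + gx] = occupied ? 1.0 : 0.0;
        }

        // Normalised head state appended after the grid.
        inputs[gridW * gridH + 0] = headX / screenW;
        inputs[gridW * gridH + 1] = headY / screenH;
        inputs[gridW * gridH + 2] = headAngle / (2 * Math.PI);
        return inputs;
    }
}
```

A coarse grid keeps the input count manageable (16×16 cells plus 3 head values here), which matters a lot when the network is being evolved rather than trained by backpropagation.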
As for your questions:

* Recurrent networks are networks that can remember previous state and use it as a kind of "memory" (there's a small sketch of the idea below). It's not what you need for this task, unless you really want to keep the inputs as they are, but then the snake would have to learn to "remember" everything it has seen, which would be impressive but is too hard.
* Unsupervised means there are no labelled "examples" to learn from; instead, learning happens through positive/negative feedback (losing = bad). Your network is unsupervised.
* Real-time / continuous: no idea, I didn't find anything except some 2007 Microsoft research: https://www.microsoft.com/en-us/research/publication/continuous-neural-networks/
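Just to illustrate what "remembering previous state" means, here is a generic Elman-style recurrent cell. It's nothing NEAT-specific, and the weights are random placeholders; the point is that the hidden state from the previous frame is fed back in alongside the new inputs, which is what lets earlier observations influence later decisions.

```csharp
using System;

// Generic Elman-style recurrent cell, just to illustrate what "remembering
// previous state" means. Not NEAT-specific; weights are random placeholders.
class RecurrentCell
{
    private readonly int _hiddenSize;
    private double[] _hidden;                 // the "memory" carried across frames
    private readonly Random _rng = new Random(0);
    private readonly double[,] _wIn;          // input  -> hidden weights
    private readonly double[,] _wRec;         // hidden -> hidden weights (the recurrence)

    public RecurrentCell(int inputSize, int hiddenSize)
    {
        _hiddenSize = hiddenSize;
        _hidden = new double[hiddenSize];
        _wIn = RandomMatrix(hiddenSize, inputSize);
        _wRec = RandomMatrix(hiddenSize, hiddenSize);
    }

    // One frame: the new hidden state depends on the inputs AND the previous hidden state.
    public double[] Step(double[] inputs)
    {
        var next = new double[_hiddenSize];
        for (int i = 0; i < _hiddenSize; i++)
        {
            double sum = 0.0;
            for (int j = 0; j < inputs.Length; j++) sum += _wIn[i, j] * inputs[j];
            for (int j = 0; j < _hiddenSize; j++) sum += _wRec[i, j] * _hidden[j];
            next[i] = Math.Tanh(sum);
        }
        _hidden = next;
        return _hidden;
    }

    private double[,] RandomMatrix(int rows, int cols)
    {
        var m = new double[rows, cols];
        for (int r = 0; r < rows; r++)
            for (int c = 0; c < cols; c++)
                m[r, c] = _rng.NextDouble() - 0.5;
        return m;
    }
}
```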
By the way, NEAT is pretty neat, I'm glad I ran into this!