 

How to make virtual organisms learn using neural networks? [closed]

I'm making a simple learning simulation, where there are multiple organisms on screen. They're supposed to learn how to eat, using their simple neural networks. Each has 4 neurons, and each neuron activates movement in one direction (it's a 2D plane viewed from a bird's-eye perspective, so there are only four directions; thus, four outputs are required). Their only inputs are four "eyes". Only one eye can be active at a time, and it essentially serves as a pointer to the nearest object (either a green food block or another organism).

Thus, the network can be imagined like this:

(image: diagram of the network, four eye inputs connected to four movement outputs)

And an organism looks like this (both in theory and the actual simulation, where they really are red blocks with their eyes around them):

(image: an organism, a red block with its four eyes around it)

And this is how it all looks (this is an old version, where the eyes didn't work yet, but it's similar):

(image: screenshot of the simulation, red organism blocks and green food blocks on a 2D plane)

Now that I have described my general idea, let me get to the heart of the problem...

  1. Initialization: First, I create some organisms and food. Then, all 16 weights in their neural networks are set to random values, like this: weight = random.random()*threshold*2. Threshold is a global value that describes how much input each neuron needs in order to activate ("fire"). It is usually set to 1.

  2. Learning: By default, the weights in the neural networks are lowered by 1% each step. But if an organism actually manages to eat something, the connection between the last active input and the output is strengthened.
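A minimal Python sketch of the two steps above, assuming a 4x4 weight matrix (eye rows, movement columns); the DECAY and REWARD constants are hypothetical names for "1% per step" and the reinforcement boost:

```python
import random

THRESHOLD = 1.0   # global value: input a neuron needs in order to fire
DECAY = 0.99      # weights are lowered by 1% each step
REWARD = 0.5      # hypothetical boost applied when the organism eats

def random_weights():
    """Step 1: all 16 weights start at random values in [0, 2*threshold)."""
    return [[random.random() * THRESHOLD * 2 for _ in range(4)]
            for _ in range(4)]

def step_decay(weights):
    """Step 2a: every weight shrinks by 1% each simulation step."""
    for row in weights:
        for j in range(4):
            row[j] *= DECAY

def reinforce(weights, eye, move):
    """Step 2b: on eating, strengthen the last active eye -> move link."""
    weights[eye][move] += REWARD
```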

But there is a big problem. I think this isn't a good approach, because they don't actually learn anything! Only those whose initial weights happened to be set beneficially will get a chance to eat something, and only they will have their weights strengthened! What about those whose connections were set up badly? They'll just die, not learn.

How do I avoid this? The only solution that comes to mind is to randomly increase/decrease the weights, so that eventually someone will get the right configuration and eat something by chance. But I find this solution very crude and ugly. Do you have any ideas?

EDIT: Thank you for your answers! Every single one of them was very useful, some were just more relevant. I have decided to use the following approach:

  1. Set all the weights to random numbers.
  2. Decrease the weights over time.
  3. Sometimes randomly increase or decrease a weight. The more successful the unit is, the less its weights get changed. (new)
  4. When an organism eats something, increase the weight between the corresponding input and the output.
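The four steps above could be combined into a single per-step update, sketched here in Python; the 1/(1+successes) scaling and the base_rate, decay, and reward values are my own hypothetical choices, not part of the original description:

```python
import random

def update(weights, successes, ate=None,
           decay=0.99, base_rate=0.1, reward=0.5):
    """One simulation step for an organism's 4x4 weight matrix.

    successes: times this unit has eaten (more success -> smaller mutations)
    ate: an (eye, move) pair if the organism just ate, else None
    """
    scale = base_rate / (1.0 + successes)            # step 3: success damps mutation
    for row in weights:
        for j in range(len(row)):
            row[j] *= decay                          # step 2: decrease over time
            row[j] += random.uniform(-scale, scale)  # step 3: random nudge
    if ate is not None:                              # step 4: reinforce on eating
        eye, move = ate
        weights[eye][move] += reward
```

With this scaling, a unit that has never eaten gets mutations up to ±0.1, while one with nine meals only gets ±0.01, so good configurations are preserved while bad ones keep exploring.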
Asked Jan 25 '12 17:01 by corazza



1 Answer

This is similar to the problem of finding a global minimum, where it's easy to get stuck in a local minimum. Consider trying to find the global minimum of the profile below: you place a ball in different places and follow it as it rolls downhill, but depending on where you place it, it may get stuck in a local dip.

(image: a bumpy curve with several local minima and one global minimum)

That is, in complicated situations, you can't always get to the best solution from every starting point using small optimizing increments. The general solutions are to fluctuate the parameters (i.e., the weights, in this case) more vigorously (and usually to reduce the size of the fluctuations as the simulation progresses, as in simulated annealing), or simply to accept that many of the starting points aren't going to go anywhere interesting.
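As an illustration of the annealing idea (a toy 1-D profile, not the simulation itself), letting the search take large random steps early, including uphill ones, and shrinking them as the "temperature" decays lets it escape a local dip that a pure downhill roller would be stuck in:

```python
import math
import random

def f(x):
    # Toy profile: a local dip near x = 1.13 and the global minimum near x = -1.30
    return x**4 - 3 * x**2 + x

def anneal(x0, steps=5000, start_temp=2.0):
    """Simulated-annealing sketch: uphill moves are accepted with
    probability exp(-delta/T), and T decays toward zero so late
    moves are almost purely downhill."""
    x = best = x0
    for k in range(steps):
        t = start_temp * (1 - k / steps) + 1e-9   # temperature schedule
        cand = x + random.uniform(-0.5, 0.5)       # random fluctuation
        delta = f(cand) - f(x)
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = cand                               # accept the move
        if f(x) < f(best):
            best = x                               # track the best point seen
    return best
```

Started at x0 = 1.0 (inside the local dip), the search typically ends up near the global minimum around x = -1.30, whereas small downhill-only increments from the same start would settle in the local dip near x = 1.13.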

Answered Oct 03 '22 16:10 by tom10