 

Giving a neural network "pain"

I've programmed a non-directional neural network. So kind of like the brain, all neurons are updated at the same time, and there are no explicit layers.

Now I'm wondering: how does pain work? How can I structure a neural network so that a "pain" signal will make it want to do anything to get rid of that pain?

Hannesh asked Feb 19 '11 at 21:02


2 Answers

It doesn't really work quite like that. The network you have described is too simple to have a concept like pain that it would try to get rid of. At a low level, pain is just another input, and that alone obviously doesn't make the network "dislike" it.

To get that kind of response, you could train the network to perform certain actions whenever it receives this particular signal. As the training becomes more refined, the signal starts to look like a real pain signal, but it is nothing more than a specific training of the network.
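To make that concrete, here is a minimal sketch in Python with NumPy. Everything in it is an assumption for illustration (the input layout, the "withdraw" action, and the model itself, which is a single logistic unit rather than the asker's non-directional network): the "pain" bit is just an input whose training targets happen to demand avoidance.

    # Minimal illustrative sketch: a single logistic unit trained so that
    # an active "pain" input drives a "withdraw" output. All names and
    # numbers here are assumptions, not anyone's actual network.
    import numpy as np

    rng = np.random.default_rng(0)

    # Inputs: [pain, sensor_a, sensor_b]; target: withdraw iff pain is set.
    X = rng.integers(0, 2, size=(200, 3)).astype(float)
    y = X[:, 0]

    w = rng.normal(scale=0.1, size=3)
    b = 0.0
    lr = 0.5

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for _ in range(500):
        p = sigmoid(X @ w + b)      # predicted withdraw probability
        grad = (p - y) / len(y)     # cross-entropy gradient w.r.t. logits
        w -= lr * (X.T @ grad)
        b -= lr * grad.sum()

    print(sigmoid(np.array([1.0, 0.0, 1.0]) @ w + b))  # ~1.0 -> withdraw
    print(sigmoid(np.array([0.0, 0.0, 1.0]) @ w + b))  # ~0.0 -> stay

After training, the network reliably "withdraws" on pain, but only because the training data said so; nothing in it actually dislikes the signal.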

The pain signal in higher animals has this "do anything to get rid of it" response because higher animals have rather advanced cognitive abilities compared to the network you have described. Worms, on the other hand, might respond in a very specific way to a "pain" input - twitch a certain way. It's hard-wired that way, and to say that the worm tries to do anything to get rid of the signal would be wrong; it's more like a motor connected to a button that spins every time you press the button.
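The worm-style reflex is worth contrasting in code with the trained network above. A sketch, assuming a hand-set fixed weight: nothing is learned and nothing is "disliked", the response is simply baked in.

    # Hard-wired "worm" reflex: a fixed weight maps stimulus to response,
    # like a motor wired to a button. Names and values are illustrative.
    import numpy as np

    REFLEX_WEIGHTS = np.array([5.0, 0.0, 0.0])  # only the pain input matters

    def reflex(inputs: np.ndarray) -> str:
        """Fixed mapping: fires whenever the pain input is active."""
        return "twitch" if inputs @ REFLEX_WEIGHTS > 2.5 else "rest"

    print(reflex(np.array([1.0, 0.0, 1.0])))  # twitch
    print(reflex(np.array([0.0, 1.0, 1.0])))  # rest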

Realistic mechanisms for getting artificial neural networks to do useful things are collectively known as "neural network training", which is a large and complex research area. You can google that phrase for various ideas.

You should be aware, however, that neural networks are not a panacea for solving hard problems; they don't automatically get things done through magic. Using them effectively requires a good deal of experimentation with training-algorithm tweaks and network-parameter tweaks.
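As a hedged illustration of what "parameter tweaks" can mean in practice, here is the same toy pain-avoidance task as above, wrapped in a function and swept over a few arbitrary learning rates; the point is only that the knob matters, not that these values are good ones.

    # Sweep one hyperparameter (the learning rate) on the toy
    # pain-avoidance task; all values are illustrative.
    import numpy as np

    def train(lr: float, steps: int = 500, seed: int = 0) -> float:
        """Train the toy pain->withdraw unit; return final cross-entropy."""
        rng = np.random.default_rng(seed)
        X = rng.integers(0, 2, size=(200, 3)).astype(float)
        y = X[:, 0]                  # withdraw exactly when the pain bit is set
        w, b = rng.normal(scale=0.1, size=3), 0.0
        for _ in range(steps):
            p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
            grad = (p - y) / len(y)
            w -= lr * (X.T @ grad)
            b -= lr * grad.sum()
        p = np.clip(1.0 / (1.0 + np.exp(-(X @ w + b))), 1e-9, 1 - 1e-9)
        return float(-(y * np.log(p) + (1 - y) * np.log(1 - p)).mean())

    for lr in (0.01, 0.1, 0.5, 2.0):
        print(f"lr={lr:<5} final loss={train(lr):.4f}")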

Roman Starkov answered Nov 16 '22 at 11:11


I don't know much (if anything) about AI theory, except that we are still looking for a way to give AI the model it needs to reason and think and ponder like real humans do. (We're still looking for the key - and maybe it's pain.)

Most of my adult life has been focused on computer programming and studying and understanding the mind.

I am writing here because I think that PAIN might be the missing link. (Also stackoverflow rocks right now.) I know that creating a model that actually enables higher thinking is a large leap, but I just had this amazing aha-type moment and had to share it. :)

In my studies of Buddhism, I learned of a scientist who studied leprosy cases. The reason lepers become deformed is that they don't feel pain when they come into contact with damaging forces. It's here that science and Buddhist reasoning converge on a fundamental truth.

Pain is what keeps us alive, defines our boundaries, and shapes how we make our choices and our world-view.

In an AI model, the principle would be to define, perhaps, a series of forces that are constantly at play. The idea is to keep the mind alive.

The concept of ideas having life is something we humans also seem to play out. When someone "kills" your idea by proving it wrong, there is at first a resistance to the "death" of the idea. In fact, it sometimes takes a lot to force an idea to change. We all know stubborn people... It has been said that the "death" of an idea is the "death" of part of one's ego. The ego is always trying to build itself up.

So you see, to give AI an ego, you must give it pain; then it will have to fight to build "safe" thoughts so that it may grow its own ideas and, eventually, something like human psychosis and "consciousness".

grigb answered Nov 16 '22 at 11:11