
tflearn / tensorflow does not learn xor

The following code was written to learn the XOR function, but about half the time the network does not learn, and the loss stays the same after every epoch.


import tensorflow as tf
import tflearn

X = [[0., 0.], [0., 1.], [1., 0.], [1., 1.]]
Y_xor = [[0.], [1.], [1.], [0.]]

# Graph definition
with tf.Graph().as_default():
    # Build the 2x2x1 network (trained with a single Adam optimizer)
    net = tflearn.input_data(shape=[None, 2])
    # Two hidden layers of 2 ReLU units each
    net = tflearn.fully_connected(net, 2, activation='relu')
    net = tflearn.fully_connected(net, 2, activation='relu')
    # Sigmoid output squashes predictions into [0, 1]
    net = tflearn.fully_connected(net, 1, activation='sigmoid')
    regressor = tflearn.regression(net, optimizer='adam', learning_rate=0.005, loss='mean_square')

    # Training
    m = tflearn.DNN(regressor)
    m.fit(X, Y_xor, n_epoch=256, snapshot_epoch=False)

    # Testing
    print("Testing XOR operator")
    print("0 xor 0:", m.predict([[0., 0.]]))
    print("0 xor 1:", m.predict([[0., 1.]]))
    print("1 xor 0:", m.predict([[1., 0.]]))
    print("1 xor 1:", m.predict([[1., 1.]]))

Sometimes I get correct results like this:

Testing XOR operator
0 xor 0: [[0.1487255096435547]]
0 xor 1: [[0.9297153949737549]]
1 xor 0: [[0.9354135394096375]]
1 xor 1: [[0.1487255096435547]]

But often this:

Testing XOR operator
0 xor 0: [[0.4999997615814209]]
0 xor 1: [[0.5000002384185791]]
1 xor 0: [[0.4999997615814209]]
1 xor 1: [[0.5000001788139343]]

My 2x2x1 network should be able to perform XOR, and there is even some evidence suggesting that this network should always converge: http://www.ncbi.nlm.nih.gov/pubmed/12662805

I have also tried changing the ReLU layers to sigmoid, running 2048 iterations, and using 4x4x1 and 6x6x1 networks, but the same problem still occurs sometimes.

Could there be something wrong with how the weights are initialized? How do I get a neural net to learn the XOR function with tflearn?

asked May 11 '16 by rdezbolcom



1 Answer

The network with ReLUs, as written in the code snippet, is expected to fail to train fairly often. The reason is that when the input to a ReLU is less than zero, its output is zero, and therefore the gradient flowing back through it is also zero.

Since you have two hidden layers, each with only two ReLU units, and since with zero-centered random initialization each unit starts with roughly a 50% chance of outputting zero, each of these layers has about a 25% chance of having all of its neurons return zero. Such a layer passes zero gradient back => the neural network will not learn at all. In that state the output of the last hidden layer is zero, so the output neuron's pre-activation is just its zero-initialized bias, and sigmoid(0) = 0.5 -- exactly what you observe on the attempts where your network didn't converge.
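
To see concretely why the output lands at exactly 0.5, here is a tiny standalone sketch (not part of the original code; the variable names are illustrative):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# If the last hidden layer is dead it outputs all zeros, so the output
# neuron's pre-activation is just its bias (zero-initialized by default):
hidden_out = np.zeros(2)
w_out = np.random.randn(2)                   # output weights don't matter here
b_out = 0.0
print(sigmoid(hidden_out @ w_out + b_out))   # -> 0.5 regardless of w_out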

Since each layer independently has a 25% chance of starting out dead, the entire network has about a 44% chance (1 - (1 - 0.25)^2 = 0.4375) of failing to train from the get-go. There is also a non-zero chance that the network does not start in such a state but drives itself into one during training, further increasing the chance of divergence.
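
As a back-of-the-envelope check on that estimate (my own sketch, assuming each ReLU unit independently starts dead with probability 0.5):

def p_fails_at_init(units_per_layer, n_layers=2, p_unit_dead=0.5):
    # Probability that a whole layer starts dead, then that at least
    # one of the hidden layers does.
    p_layer_dead = p_unit_dead ** units_per_layer
    return 1 - (1 - p_layer_dead) ** n_layers

print(p_fails_at_init(2))   # 2x2x1 net -> 0.4375
print(p_fails_at_init(4))   # 4x4x1 net -> ~0.12
print(p_fails_at_init(6))   # 6x6x1 net -> ~0.03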

With four neurons per layer the chance is significantly lower (roughly 12% by the same estimate), but still not zero.

Now, the only thing I cannot answer is why your network doesn't converge when you replace ReLU with sigmoid -- such a network should always be able to learn XOR. My only hypothesis is that you replaced just one ReLU with a sigmoid, not both of them.

Can you replace both ReLUs with sigmoids and confirm that you still observe divergence?
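
For reference, a minimal sketch of that change -- the question's code with both hidden activations swapped to sigmoid and everything else left as-is (sigmoid units train more slowly, so more epochs may be needed):

import tensorflow as tf
import tflearn

X = [[0., 0.], [0., 1.], [1., 0.], [1., 1.]]
Y_xor = [[0.], [1.], [1.], [0.]]

with tf.Graph().as_default():
    net = tflearn.input_data(shape=[None, 2])
    # Sigmoid units cannot get stuck at a hard zero the way ReLUs can
    net = tflearn.fully_connected(net, 2, activation='sigmoid')
    net = tflearn.fully_connected(net, 2, activation='sigmoid')
    net = tflearn.fully_connected(net, 1, activation='sigmoid')
    regressor = tflearn.regression(net, optimizer='adam',
                                   learning_rate=0.005, loss='mean_square')
    m = tflearn.DNN(regressor)
    m.fit(X, Y_xor, n_epoch=256, snapshot_epoch=False)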

answered Sep 19 '22 by Ishamael