 

TensorFlow: 2-layer feed-forward neural net

I'm trying to implement a simple fully-connected feed-forward neural net in TensorFlow (Python 3 version). The network has 2 inputs and 1 output, and I'm trying to train it to output the XOR of the two inputs. My code is as follows:

import numpy as np
import tensorflow as tf

sess = tf.InteractiveSession()

inputs = tf.placeholder(tf.float32, shape = [None, 2])
desired_outputs = tf.placeholder(tf.float32, shape = [None, 1])

weights_1 = tf.Variable(tf.zeros([2, 3]))
biases_1 = tf.Variable(tf.zeros([1, 3]))
layer_1_outputs = tf.nn.sigmoid(tf.matmul(inputs, weights_1) + biases_1)

weights_2 = tf.Variable(tf.zeros([3, 1]))
biases_2 = tf.Variable(tf.zeros([1, 1]))
layer_2_outputs = tf.nn.sigmoid(tf.matmul(layer_1_outputs, weights_2) + biases_2)

error_function = -tf.reduce_sum(desired_outputs * tf.log(layer_2_outputs))
train_step = tf.train.GradientDescentOptimizer(0.05).minimize(error_function)

sess.run(tf.initialize_all_variables())

training_inputs = [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]
training_outputs = [[0.0], [1.0], [1.0], [0.0]]

for i in range(10000):
    train_step.run(feed_dict = {inputs: np.array(training_inputs), desired_outputs: np.array(training_outputs)})

print(sess.run(layer_2_outputs, feed_dict = {inputs: np.array([[0.0, 0.0]])}))
print(sess.run(layer_2_outputs, feed_dict = {inputs: np.array([[0.0, 1.0]])}))
print(sess.run(layer_2_outputs, feed_dict = {inputs: np.array([[1.0, 0.0]])}))
print(sess.run(layer_2_outputs, feed_dict = {inputs: np.array([[1.0, 1.0]])}))

It seems simple enough, but the print statements at the end show that the network is nowhere near the desired outputs, regardless of the number of training iterations or the learning rate. Can anyone see what I am doing wrong?

Thank you.

EDIT: I've also tried the following alternative error function:

error_function = 0.5 * tf.reduce_sum(tf.sub(layer_2_outputs, desired_outputs) * tf.sub(layer_2_outputs, desired_outputs))

That error function is the sum of the squares of the errors. It ALWAYS results in the network outputting a value of exactly 0.5, which is another indication of a mistake somewhere in my code.
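(Written out, that loss is E = 0.5 * Σ_i (o_i - y_i)^2, where o_i is the network's output and y_i the desired output for training example i.)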

EDIT 2: I've found that my code works fine for AND and OR, but not for XOR. I'm extremely puzzled now.

asked Dec 18 '22 by CircuitScholar
1 Answer

There are several issues in your code. Below I've commented the code line by line to walk you through the solution.

Note: XOR is not linearly separable, so this solution uses more than one hidden layer.

N.B.: the lines that start with # [!] mark the places where your code went wrong.

import numpy as np
import tensorflow as tf

sess = tf.InteractiveSession()

# a batch of inputs of 2 value each
inputs = tf.placeholder(tf.float32, shape=[None, 2])

# a batch of output of 1 value each
desired_outputs = tf.placeholder(tf.float32, shape=[None, 1])

# [!] define the number of hidden units in the first layer
HIDDEN_UNITS = 4

# connect the 2 inputs to HIDDEN_UNITS hidden units
# [!] Initialize the weights with random numbers to make the network learn
weights_1 = tf.Variable(tf.truncated_normal([2, HIDDEN_UNITS]))
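# [!] (with all-zero initial weights every hidden unit computes the same output and
# gets the same gradient, so the units stay identical and the net can't learn XOR)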

# [!] The biases are single values per hidden unit
biases_1 = tf.Variable(tf.zeros([HIDDEN_UNITS]))

# connect the 2 inputs to every hidden unit and add the bias
layer_1_outputs = tf.nn.sigmoid(tf.matmul(inputs, weights_1) + biases_1)

# [!] The XOR function is not linearly separable
# [!] An MLP (multi-layer perceptron) can learn to separate non-linearly separable points
# (you can think of it as learning hypercurves, not only hyperplanes)
# [!] Let's add a new layer and change layer 2 to output more than 1 value

# connect the first hidden layer to 2 hidden units in the second hidden layer
weights_2 = tf.Variable(tf.truncated_normal([HIDDEN_UNITS, 2]))
# [!] Same as above: one bias per hidden unit
biases_2 = tf.Variable(tf.zeros([2]))

# connect the hidden units to the second hidden layer
layer_2_outputs = tf.nn.sigmoid(
    tf.matmul(layer_1_outputs, weights_2) + biases_2)

# [!] create the new layer
weights_3 = tf.Variable(tf.truncated_normal([2, 1]))
biases_3 = tf.Variable(tf.zeros([1]))

logits = tf.nn.sigmoid(tf.matmul(layer_2_outputs, weights_3) + biases_3)

# [!] The error function you chose is suited to a multiclass classification task, not to XOR.
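# [!] (with -sum(y * log(p)) the examples whose desired output is 0 contribute nothing
# to the loss, so nothing ever pushes the network towards outputting 0; a squared
# error penalizes every example)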
error_function = 0.5 * tf.reduce_sum(tf.sub(logits, desired_outputs) * tf.sub(logits, desired_outputs))

train_step = tf.train.GradientDescentOptimizer(0.05).minimize(error_function)

sess.run(tf.initialize_all_variables())

training_inputs = [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]

training_outputs = [[0.0], [1.0], [1.0], [0.0]]

for i in range(20000):
    _, loss = sess.run([train_step, error_function],
                       feed_dict={inputs: np.array(training_inputs),
                                  desired_outputs: np.array(training_outputs)})
    print(loss)

print(sess.run(logits, feed_dict={inputs: np.array([[0.0, 0.0]])}))
print(sess.run(logits, feed_dict={inputs: np.array([[0.0, 1.0]])}))
print(sess.run(logits, feed_dict={inputs: np.array([[1.0, 0.0]])}))
print(sess.run(logits, feed_dict={inputs: np.array([[1.0, 1.0]])}))

I increased the number of training iterations to make sure the network converges no matter what the random initialization values are.

The output after 20000 training iterations is:

[[ 0.01759939]]
[[ 0.97418505]]
[[ 0.97734243]]
[[ 0.0310041]]

It looks pretty good.
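As a small aside (not from the original answer): since the inputs placeholder has shape [None, 2], you can also evaluate all four cases in a single batched run and round the sigmoid outputs to hard 0/1 predictions:

# evaluate all four XOR input pairs in one batched call and threshold at 0.5
test_inputs = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
print(sess.run(tf.round(logits), feed_dict={inputs: test_inputs}))
# expected after training (approximately): [[0.], [1.], [1.], [0.]]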

answered Dec 21 '22 by nessuno