I am new to TensorFlow and need to implement a deep neural network for a regression task. I could not find any sample code on the internet where regression is performed with a deep neural network (please post a link if you know of one), so I have tried to merge the tutorials on deep neural networks for classification with those on regression. As expected, I am bombarded with errors. The error message reads:
InvalidArgumentError: In[0] is not a matrix
[[Node: MatMul_35 = MatMul[T=DT_FLOAT, transpose_a=false, transpose_b=false, _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_Placeholder_36_0, Variable_72/read)]]
The code:
import tensorflow as tf
import numpy
import matplotlib.pyplot as plt

n_nodes_hl1 = 100
n_nodes_hl2 = 100

batch_size = 100
n_input = 1
n_output = 1
learning_rate = 0.01

train_X = numpy.asarray([3.3, 4.4, 5.5, 6.71, 6.93, 4.168, 9.779, 6.182, 7.59, 2.167,
                         7.042, 10.791, 5.313, 7.997, 5.654, 9.27, 3.1])
train_Y = numpy.asarray([1.7, 2.76, 2.09, 3.19, 1.694, 1.573, 3.366, 2.596, 2.53, 1.221,
                         2.827, 3.465, 1.65, 2.904, 2.42, 2.94, 1.3])

x = tf.placeholder('float')
y = tf.placeholder('float')

def neural_network_model(data):
    hidden_1_layer = {'weights': tf.Variable(tf.random_normal([n_input, n_nodes_hl1])),
                      'biases': tf.Variable(tf.random_normal([n_nodes_hl1]))}
    hidden_2_layer = {'weights': tf.Variable(tf.random_normal([n_nodes_hl1, n_nodes_hl2])),
                      'biases': tf.Variable(tf.random_normal([n_nodes_hl2]))}

    l1 = tf.add(tf.matmul(data, hidden_1_layer['weights']), hidden_1_layer['biases'])
    l1 = tf.nn.relu(l1)

    l2 = tf.add(tf.matmul(l1, hidden_2_layer['weights']), hidden_2_layer['biases'])
    l2 = tf.nn.relu(l2)

    output = tf.reduce_sum(l2)
    return output

def train_neural_network(x):
    prediction = neural_network_model(x)
    cost = tf.square(y - prediction)
    optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

    hm_epochs = 5
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())

        for epoch in range(hm_epochs):
            epoch_loss = 0
            for (X, Y) in zip(train_X, train_Y):
                _, c = sess.run([optimizer, cost], feed_dict={x: X, y: Y})
                epoch_loss += c
            print('Epoch', epoch, 'completed out of', hm_epochs, 'loss:', epoch_loss)

        plt.plot(train_X, train_Y, 'ro', label='Original data')
        plt.plot(train_X, prediction, label='Fitted line')
        plt.legend()
        plt.show()

        test_X = numpy.asarray([6.83, 4.668, 8.9, 7.91, 5.7, 8.7, 3.1, 2.1])
        test_Y = numpy.asarray([1.84, 2.273, 3.2, 2.831, 2.92, 3.24, 1.35, 1.03])

        print("Testing Data")
        correct = tf.equal(tf.argmax(prediction, 1), tf.argmax(y, 1))
        accuracy = tf.reduce_mean(tf.cast(correct, 'float'))
        print('Accuracy:', accuracy.eval({x: test_X, y: test_Y}))

train_neural_network(x)
As far as I can tell, there is an issue with the dimensions of the hidden-layer weights and/or biases (I may be wrong).
Side note: this is just a simplified model of my project; the training and testing data points are taken from examples on the internet. My actual data would be pixel values of several images.
The inputs to matmul() must be matrices, but you are feeding in scalar values. Change this line (working for me):
_, c = sess.run([optimizer, cost], feed_dict={x: [[X]], y: [[Y]]})
Output:
('Epoch', 0, 'completed out of', 5, 'loss:', array([[ 1.20472407e+14]], dtype=float32))
('Epoch', 1, 'completed out of', 5, 'loss:', array([[ 6.82631159]], dtype=float32))
('Epoch', 2, 'completed out of', 5, 'loss:', array([[ 8.83840561]], dtype=float32))
('Epoch', 3, 'completed out of', 5, 'loss:', array([[ 8.00222397]], dtype=float32))
('Epoch', 4, 'completed out of', 5, 'loss:', array([[ 7.6564579]], dtype=float32))
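Beyond the one-line fix, a more robust variant (a minimal sketch, not from the original answer, assuming TensorFlow 1.x) is to give the placeholders explicit 2-D shapes and reduce the output per row, so matmul() always receives matrices and whole batches can be fed at once:

# Sketch only (TF 1.x assumed): shaped placeholders catch rank mismatches
# when the graph is built instead of at run time.
x = tf.placeholder('float', [None, n_input])     # batch of rows, 1 feature each
y = tf.placeholder('float', [None, n_output])

# Inside neural_network_model(), sum per row instead of over the whole
# batch, so each input row produces one prediction of shape [batch, 1]:
output = tf.reduce_sum(l2, axis=1, keepdims=True)

# The training loop can then feed entire (N, 1) matrices in one call:
_, c = sess.run([optimizer, cost],
                feed_dict={x: train_X.reshape(-1, 1),
                           y: train_Y.reshape(-1, 1)})

With shaped placeholders, feeding a bare scalar raises a clear shape error immediately, which makes the "In[0] is not a matrix" class of bug much easier to track down.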
Hope this helps!
Comment: This is not a good example to explore if you're going to work with images.
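To expand on that comment: if the real inputs are image pixels, as the question's side note says, the simplest way to make them fit this fully connected model (a hypothetical sketch; the array names and sizes are assumptions) is to flatten each image into one row of the input matrix:

import numpy

# Hypothetical data: 10 grayscale images of 28x28 pixels each.
images = numpy.random.rand(10, 28, 28)

# Flatten each image to one row; n_input above would become 28 * 28 = 784.
flat = images.reshape(images.shape[0], -1)
print(flat.shape)   # (10, 784)

That said, the comment's caution holds: flattening discards spatial structure, which is usually important for images.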