I was playing with the TensorFlow examples for building a linear regression, and my code is below:
import numpy as np
import tensorflow as tf

train_X = np.asarray([3.3,4.4,5.5,6.71,6.93,4.168,9.779,6.182,7.59,2.167,7.042,10.791,5.313,7.997,5.654,9.27,3.1])
train_Y = np.asarray([1.7,2.76,2.09,3.19,1.694,1.573,3.366,2.596,2.53,1.221,2.827,3.465,1.65,2.904,2.42,2.94,1.3])
n_samples = train_X.shape[0]
batch_size = 100
total_epochs = 50

X = tf.placeholder('float')
y = tf.placeholder('float')
W = tf.Variable(np.random.randn(), name="weights")
b = tf.Variable(np.random.randn(), name="bias")

y_pred = tf.add(tf.mul(X, W), b)
cost = tf.reduce_sum(tf.pow(y_pred - y, 2)) / (2 * n_samples)  # L2 loss
optimizer = tf.train.AdamOptimizer().minimize(cost)  # gradient step

init = tf.initialize_all_variables()
with tf.Session() as sess:
    sess.run(init)
    print("Initial values for W and b: ", W.eval(), b.eval())
    for _ in range(total_epochs):
        for (x, y) in zip(train_X, train_Y):
            sess.run(optimizer, feed_dict={X: x, y: y})
    print("Value for W and b after GD: ", W.eval(), b.eval())
However, running the above gives me this error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-11-185d8e05cbcd> in <module>()
28 for _ in range(total_epochs):
29 for (x, y) in zip(train_X, train_Y):
---> 30 sess.run(optimizer, feed_dict={X: x, y: y})
31 print("Value for W and b after GD: ", W.eval(), b.eval())
/home/ubuntu/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.pyc in run(self, fetches, feed_dict, options, run_metadata)
338 try:
339 result = self._run(None, fetches, feed_dict, options_ptr,
--> 340 run_metadata_ptr)
341 if run_metadata:
342 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)
/home/ubuntu/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.pyc in _run(self, handle, fetches, feed_dict, options, run_metadata)
540 except Exception as e:
541 raise TypeError('Cannot interpret feed_dict key as Tensor: '
--> 542 + e.args[0])
543
544 if isinstance(subfeed_val, ops.Tensor):
TypeError: Cannot interpret feed_dict key as Tensor: Can not convert a float64 into a Tensor.
After digging deeper I realized the bug was here:
feed_dict={X: x, y: y}
where the key and the value I am using share the same name ('y' and 'y'). If I change the key to Y and modify the rest accordingly:
Y = tf.placeholder('float')
cost = tf.reduce_sum(tf.pow(y_pred - Y, 2)) / (2 * n_samples)  # L2 loss
sess.run(optimizer, feed_dict={X: x, Y: y})
The code runs perfectly.
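Putting the pieces together, this is the full working version (same pre-1.0 TensorFlow API as above):

import numpy as np
import tensorflow as tf

train_X = np.asarray([3.3,4.4,5.5,6.71,6.93,4.168,9.779,6.182,7.59,2.167,7.042,10.791,5.313,7.997,5.654,9.27,3.1])
train_Y = np.asarray([1.7,2.76,2.09,3.19,1.694,1.573,3.366,2.596,2.53,1.221,2.827,3.465,1.65,2.904,2.42,2.94,1.3])
n_samples = train_X.shape[0]
total_epochs = 50

X = tf.placeholder('float')
Y = tf.placeholder('float')  # distinct name: capital Y for the placeholder
W = tf.Variable(np.random.randn(), name="weights")
b = tf.Variable(np.random.randn(), name="bias")

y_pred = tf.add(tf.mul(X, W), b)
cost = tf.reduce_sum(tf.pow(y_pred - Y, 2)) / (2 * n_samples)  # L2 loss
optimizer = tf.train.AdamOptimizer().minimize(cost)

init = tf.initialize_all_variables()
with tf.Session() as sess:
    sess.run(init)
    for _ in range(total_epochs):
        for (x, y) in zip(train_X, train_Y):  # lowercase x, y are the fed values
            sess.run(optimizer, feed_dict={X: x, Y: y})
    print("Value for W and b after GD: ", W.eval(), b.eval())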
I am wondering why I can't use the same symbol for the key and the value in feed_dict. Shouldn't the y on the left (the key) refer to the y placeholder in the cost function above?
The feed_dict argument is a dictionary whose keys must be Tensors. In your corrected example, X and Y are those Tensors.
However, if you reuse the name X or Y for another Python variable, you overwrite the reference to the original Tensor, and X or Y no longer corresponds to the node in your graph. TensorFlow cannot tell that you mean the graph node, because the name has been rebound. That is exactly what happens in your original code: the loop for (x, y) in zip(train_X, train_Y) rebinds y to a float64 from train_Y, so the key in feed_dict={X: x, y: y} is a plain number rather than the placeholder, which is precisely what the TypeError reports.
In a nutshell, you are trying to use the same name for two different variables, which is impossible.
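To see the rebinding directly, here is a minimal sketch (assuming the same pre-2.0 tf.placeholder API as in the question; the values are arbitrary):

import numpy as np
import tensorflow as tf

y = tf.placeholder('float')   # the name 'y' refers to a placeholder Tensor here
print(type(y))                # -> a tensorflow Tensor class

for (x, y) in zip(np.asarray([1.0]), np.asarray([2.0])):
    # the loop target rebinds the name 'y' to a NumPy scalar
    print(type(y))            # -> numpy.float64; the Tensor is shadowed
    # feed_dict={y: y} would now use that float64 as the key, which is why
    # TensorFlow raises "Cannot interpret feed_dict key as Tensor:
    # Can not convert a float64 into a Tensor."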