
What's the difference between tf.sub and the plain minus operator in TensorFlow?

I am trying to use TensorFlow. Here is a very simple piece of code.

import tensorflow as tf

train = tf.placeholder(tf.float32, [1], name="train")
W1 = tf.Variable(tf.truncated_normal([1], stddev=0.1), name="W1")
loss = tf.pow(tf.sub(train, W1), 2)
step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

Just ignore the optimization part (the last line). It takes a floating-point number and trains W1 so as to minimize the squared difference.

My question is simple. If I just use the minus sign instead of tf.sub as below, what is the difference? Will it cause a wrong result?

loss = tf.pow(train-W1, 2)

When I replace it, the result looks the same. If they are the same, why do we need the tf.add/tf.sub functions at all?

Can the built-in backpropagation calculation only be done with the tf.* functions?
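For reference, here is a minimal end-to-end sketch of how the graph above could be run, assuming a TensorFlow 1.x release (graph mode with tf.placeholder/tf.Session) and an arbitrary target value of 5.0 chosen only for illustration:

import tensorflow as tf

train = tf.placeholder(tf.float32, [1], name="train")
W1 = tf.Variable(tf.truncated_normal([1], stddev=0.1), name="W1")
loss = tf.pow(tf.subtract(train, W1), 2)   # tf.sub in older releases
step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(100):
        sess.run(step, feed_dict={train: [5.0]})  # feed the target value
    print(sess.run(W1))  # W1 ends up close to [5.0]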

asked Mar 20 '16 by YW P Kwon

People also ask

How do you subtract in Tensorflow?

Syntax: tf.subtract(x, y, name=None). x: a Tensor; must be one of the following types: bfloat16, half, float32, float64, uint8, int8, uint16, int16, int32, int64, complex64, complex128, uint32, uint64. y: a Tensor; must have the same type as x. name: a name for the operation (optional).
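A minimal usage sketch (assuming TensorFlow 2.x eager execution and made-up values):

import tensorflow as tf

x = tf.constant([3.0, 5.0])
y = tf.constant([1.0, 2.0])
z = tf.subtract(x, y, name="my_sub")  # same result as x - y
print(z.numpy())  # [2. 3.]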

Is tf variable trainable?

tf.Variable is trainable by default (trainable=True), and tf.GradientTape watches trainable variables automatically.
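A small sketch of that behaviour, assuming TensorFlow 2.x: the variable is trainable by default, so the tape records operations on it without an explicit watch().

import tensorflow as tf

w = tf.Variable(3.0)              # trainable=True by default
with tf.GradientTape() as tape:
    loss = w * w                  # watched automatically because w is trainable
grad = tape.gradient(loss, w)
print(grad.numpy())               # 6.0, i.e. d(w^2)/dw at w = 3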

How do you transpose in Tensorflow?

Use tf.transpose(x, perm=[1, 0]) for a 2-D matrix. For a 3-D tensor, calling tf.transpose without perm defaults to reversing the dimensions, i.e. perm=[2, 1, 0]. To transpose the matrices inside dimension 0 (such as when dimension 0 is the batch dimension), set perm=[0, 2, 1].
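A short sketch with a made-up batch of matrices, assuming TensorFlow 2.x:

import tensorflow as tf

x = tf.zeros([4, 2, 3])  # e.g. a batch of 4 matrices of shape 2x3
print(tf.transpose(x).shape)                  # (3, 2, 4): default reverses all dims
print(tf.transpose(x, perm=[0, 2, 1]).shape)  # (4, 3, 2): transpose each matrix in the batch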

What does tensor mean in Tensorflow?

In TensorFlow, all computations involve tensors. A tensor is an n-dimensional vector or matrix that can represent any type of data. All values in a tensor have the same data type and a known (or partially known) shape. The shape of the data is the dimensionality of the matrix or array.
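For example (assuming TensorFlow 2.x), a rank-2 tensor with a single dtype and a known shape:

import tensorflow as tf

t = tf.constant([[1.0, 2.0, 3.0],
                 [4.0, 5.0, 6.0]])
print(t.shape)  # (2, 3)
print(t.dtype)  # <dtype: 'float32'>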


2 Answers

Yes, - and + resolve to tf.sub and tf.add. If you look at the TensorFlow code, you will see that these operators on tf.Variable are overloaded with the tf.* methods.

As to why both exist: I assume the tf.* ones are there for consistency, so that sub and, say, the matmul operation can be used in the same way, while the operator overloading is there for convenience. A quick check is sketched below.
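A sketch of that check, assuming a TensorFlow 1.x release and made-up values; both spellings build the same kind of op and give the same result:

import tensorflow as tf

a = tf.constant([3.0])
b = tf.constant([1.0])
diff_op = a - b              # operator overload on Tensor/Variable
diff_fn = tf.subtract(a, b)  # tf.sub in older releases

with tf.Session() as sess:
    print(sess.run([diff_op, diff_fn]))  # both evaluate to array([2.], dtype=float32)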

answered Oct 09 '22 by Daniel Slater


(tf.sub appears to have been replaced with tf.subtract)

The only advantage I see is that you can specify a name for the operation, as in:

tf.subtract(train, W1, name='foofoo')

This helps identify the operation causing an error, as the name you provide is also shown:

ValueError: Dimensions must be equal, but are 28 and 40 for 'foofoo' (op: 'Sub') with input shapes

It may also help with understanding your graph in TensorBoard. It might be overkill for most people, as Python also shows the line number that triggered the error.
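A sketch of the shape-mismatch case (assuming a TensorFlow 1.x release and the made-up shapes 28 and 40 from the error above): because the op is named, 'foofoo' appears in the ValueError raised at graph-construction time.

import tensorflow as tf

a = tf.placeholder(tf.float32, [28])
b = tf.placeholder(tf.float32, [40])
# Raises:
# ValueError: Dimensions must be equal, but are 28 and 40 for 'foofoo' (op: 'Sub') ...
diff = tf.subtract(a, b, name='foofoo')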

answered Oct 09 '22 by Robert Lugg