 

In TensorFlow, what is the difference between tf.add and the operator (+)?

Tags:

tensorflow

In TensorFlow tutorials I see both tf.add(tf.matmul(X, W), b) and tf.matmul(X, W) + b. What is the difference between using the math functions tf.add(), tf.assign(), etc. and the operators + and =, in precision or other respects?

asked Jun 18 '16 by platinor


2 Answers

There's no difference in precision between a+b and tf.add(a, b). The former translates to a.__add__(b), which gets mapped to tf.add by means of the following line in math_ops.py:

_OverrideBinaryOperatorHelper(gen_math_ops.add, "add")
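As a quick sanity check that the two spellings build the same op, you could run something like the sketch below (this assumes a TF 1.x-style session environment, matching the rest of this answer):

import tensorflow as tf
import numpy as np

a = tf.placeholder(tf.float32)
b = tf.placeholder(tf.float32)

# Feed the same values to both versions; since they dispatch to the same
# underlying op, the results are identical.
feed = {a: np.random.rand(3, 3), b: np.random.rand(3, 3)}
with tf.Session() as sess:
    print(np.array_equal(sess.run(a + b, feed), sess.run(tf.add(a, b), feed)))  # True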

The only difference is that the node name in the underlying Graph is add instead of Add. You can generally compare things by looking at the underlying Graph representation like this:

tf.reset_default_graph()
dtype = tf.int32
a = tf.placeholder(dtype)
b = tf.placeholder(dtype)
c = a + b
print(tf.get_default_graph().as_graph_def())

You could also see this directly by inspecting the __add__ method. There's an extra level of indirection because it's a closure, but you can get the underlying function as follows

real_function = tf.Tensor.__add__.im_func.func_closure[0].cell_contents
print(real_function.__module__ + "." + real_function.__name__)
print(tf.add.__module__ + "." + tf.add.__name__)

And you'll see the output below, which means that they call the same underlying function:

tensorflow.python.ops.gen_math_ops.add
tensorflow.python.ops.gen_math_ops.add

You can see from tf.Tensor.OVERLOADABLE_OPERATORS that the following Python special methods are potentially overloaded by the appropriate TensorFlow versions:

{'__abs__', '__add__', '__and__', '__div__', '__floordiv__', '__ge__',
 '__getitem__', '__gt__', '__invert__', '__le__', '__lt__', '__mod__',
 '__mul__', '__neg__', '__or__', '__pow__', '__radd__', '__rand__',
 '__rdiv__', '__rfloordiv__', '__rmod__', '__rmul__', '__ror__',
 '__rpow__', '__rsub__', '__rtruediv__', '__rxor__', '__sub__',
 '__truediv__', '__xor__'}

Those methods are described in the Python reference, section 3.3.7: emulating numeric types. Note that the Python data model does not provide a way to overload the assignment operator =, so assignment always uses the native Python implementation.
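In practice this means that a plain = only rebinds a Python name; actually updating a variable in the graph requires tf.assign (or Variable.assign). A minimal sketch, again assuming a TF 1.x-style session:

import tensorflow as tf

v = tf.Variable(0, name="counter")
t = v + 1               # '=' here merely binds the name 't' to a new add tensor;
                        # the variable in the graph is untouched
inc = tf.assign(v, t)   # tf.assign creates an op that actually updates the variable

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(v))  # 0
    sess.run(inc)
    print(sess.run(v))  # 1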

answered Sep 21 '22 by Yaroslav Bulatov


Yaroslav nicely explained that there is no real difference. I will just add a note on when using tf.add is beneficial.

tf.add has one important parameter: name. It allows you to name the operation in the graph, which will then be visible in TensorBoard. So my rule of thumb is: if it will be beneficial to name an operation for TensorBoard, I use the tf. equivalent; otherwise I go for brevity and use the overloaded operator.
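For example (the shapes and the name "logits" below are just illustrative assumptions, not anything from the question):

import tensorflow as tf

X = tf.placeholder(tf.float32, [None, 784], name="X")
W = tf.Variable(tf.zeros([784, 10]), name="W")
b = tf.Variable(tf.zeros([10]), name="b")

# The overloaded '+' would create a node with an auto-generated name like 'add';
# passing name= makes this node easy to spot in TensorBoard.
logits = tf.add(tf.matmul(X, W), b, name="logits")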

answered Sep 20 '22 by Salvador Dali