In TensorFlow tutorials, I see both code like `tf.add(tf.matmul(X, W), b)` and `tf.matmul(X, W) + b`. What is the difference between using the math functions `tf.add()`, `tf.assign()`, etc. and the operators `+` and `=`, in precision or other aspects?
There's no difference in precision between `a + b` and `tf.add(a, b)`. The former translates to `a.__add__(b)`, which gets mapped to `tf.add` by means of the following line in math_ops.py:

```python
_OverrideBinaryOperatorHelper(gen_math_ops.add, "add")
```

The only difference is that the node name in the underlying Graph is `add` instead of `Add`. You can generally compare things by looking at the underlying Graph representation, like this:
```python
tf.reset_default_graph()
dtype = tf.int32
a = tf.placeholder(dtype)
b = tf.placeholder(dtype)
c = a + b
print(tf.get_default_graph().as_graph_def())
```
You can also see this directly by inspecting the `__add__` method. There's an extra level of indirection because it's a closure, but you can get the underlying function as follows:
```python
real_function = tf.Tensor.__add__.im_func.func_closure[0].cell_contents
print(real_function.__module__ + "." + real_function.__name__)
print(tf.add.__module__ + "." + tf.add.__name__)
```
You'll see the output below, which means that they call the same underlying function:
```
tensorflow.python.ops.gen_math_ops.add
tensorflow.python.ops.gen_math_ops.add
```
You can see from `tf.Tensor.OVERLOADABLE_OPERATORS` that the following Python special methods are potentially overloaded by the appropriate TensorFlow versions:

```python
{'__abs__', '__add__', '__and__', '__div__', '__floordiv__', '__ge__',
 '__getitem__', '__gt__', '__invert__', '__le__', '__lt__', '__mod__',
 '__mul__', '__neg__', '__or__', '__pow__', '__radd__', '__rand__',
 '__rdiv__', '__rfloordiv__', '__rmod__', '__rmul__', '__ror__',
 '__rpow__', '__rsub__', '__rtruediv__', '__rxor__', '__sub__',
 '__truediv__', '__xor__'}
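To make the mechanism concrete, here is a minimal pure-Python sketch of what a helper like `_OverrideBinaryOperatorHelper` does: install a library function as an operator overload, so that `a + b` dispatches to the same underlying function as the public API. The names `MiniTensor` and `override_binary_operator` are invented for illustration; this is not TensorFlow's actual code.

```python
class MiniTensor:
    """Toy stand-in for tf.Tensor, holding a plain Python value."""
    def __init__(self, value):
        self.value = value

def add(x, y):
    """Stand-in for gen_math_ops.add: the 'real' library function."""
    return MiniTensor(x.value + y.value)

def override_binary_operator(func, name):
    # Install func as the __<name>__ special method on MiniTensor,
    # mirroring what _OverrideBinaryOperatorHelper does for tf.Tensor.
    setattr(MiniTensor, "__%s__" % name, func)

override_binary_operator(add, "add")

a, b = MiniTensor(2), MiniTensor(3)
# Both spellings now run the exact same function:
assert (a + b).value == add(a, b).value == 5
```

This is why the two spellings cannot differ in precision: by the time the computation runs, the operator form has already been rewritten into the function call.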
Those methods are described in the Python reference, section 3.3.7: emulating numeric types. Note that the Python data model does not provide a way to overload the assignment operator `=`, so assignment always uses the native Python implementation.
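The following pure-Python illustration (not TensorFlow code; `Box` is a hypothetical stand-in for a mutable variable) shows why: `=` only rebinds a name and consults no special method on the object, so a library cannot turn plain assignment into a graph operation. That is why TensorFlow exposes mutation as an explicit function or method such as `tf.assign`.

```python
class Box:
    """Hypothetical mutable container, standing in for a tf.Variable."""
    def __init__(self, value):
        self.value = value

    def assign(self, value):
        # Explicit mutation, analogous to tf.assign / Variable.assign:
        # the object itself is updated, so every alias sees the change.
        self.value = value
        return self

v = Box(1)
alias = v
v = Box(2)            # plain `=` rebinds the name `v`; the object is untouched
assert alias.value == 1

w = Box(1)
w.assign(2)           # explicit assign mutates the object in place
assert w.value == 2
```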
Yaroslav nicely explained that there is no real difference. I will just add when using `tf.add` is beneficial.

`tf.add` has one important parameter: `name`. It allows you to name the operation in the graph, which will be visible in TensorBoard. So my rule of thumb is: if it would be beneficial to name an operation in TensorBoard, I use the `tf.*` equivalent; otherwise I go for brevity and use the overloaded operator.
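A small pure-Python sketch (not TensorFlow's actual implementation; `MiniGraph` is invented for illustration) of why the `name` parameter matters: anonymous ops get auto-uniquified names like `add`, `add_1`, which are hard to pick out in TensorBoard, while an explicitly named op is immediately recognizable.

```python
from collections import defaultdict

class MiniGraph:
    """Toy graph that uniquifies node names, in the spirit of a TF Graph."""
    def __init__(self):
        self._counts = defaultdict(int)

    def add_node(self, name=None):
        # Fall back to the generic op name when no explicit name is given,
        # then append a counter suffix to keep names unique.
        base = name or "add"
        n = self._counts[base]
        self._counts[base] += 1
        return base if n == 0 else "%s_%d" % (base, n)

g = MiniGraph()
assert g.add_node() == "add"                   # overloaded `+`: anonymous op
assert g.add_node() == "add_1"                 # second anonymous op
assert g.add_node(name="logits") == "logits"   # tf.add(..., name="logits")
```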