What is the most efficient way to flatten a 2D tensor which is actually a horizontal or vertical vector into a 1D tensor?
Is there a difference in terms of performance between:
tf.reshape(w, [-1])
and
tf.squeeze(w)
?
tf.squeeze(w) only removes dimensions of size 1, so it fully flattens a tensor only when every extra dimension has size 1 (e.g. a (1, N) row vector or an (N, 1) column vector), whereas tf.reshape(w, [-1]) will flatten the entire tensor regardless of its shape or depth.
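A minimal sketch of that difference (the tensor values here are illustrative, not from the original post):

```python
import tensorflow as tf

# A (1, 3) row vector: both calls produce a 1-D tensor of shape (3,).
row = tf.constant([[1, 2, 3]])
print(tf.reshape(row, [-1]).shape)  # (3,)
print(tf.squeeze(row).shape)        # (3,)

# A (2, 3) matrix: reshape flattens it to (6,), but squeeze is a
# no-op because there are no size-1 dimensions to remove.
mat = tf.constant([[1, 2, 3], [4, 5, 6]])
print(tf.reshape(mat, [-1]).shape)  # (6,)
print(tf.squeeze(mat).shape)        # (2, 3)
```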
To flatten the tensor, we're going to use the TensorFlow reshape operation: tf.reshape. We pass in our tensor, currently represented by tf_initial_tensor_constant, and the shape we give it is a -1 inside of a Python list, which tells TensorFlow to infer the single remaining dimension.
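In code, that step looks like the following (tf_initial_tensor_constant is defined here as a placeholder, since its original definition isn't shown):

```python
import tensorflow as tf

# Placeholder for the tensor referred to in the walkthrough above.
tf_initial_tensor_constant = tf.ones([2, 3, 4])

# [-1] tells reshape to infer the size of the single output
# dimension, flattening all 2 * 3 * 4 = 24 elements.
flattened = tf.reshape(tf_initial_tensor_constant, [-1])
print(flattened.shape)  # (24,)
```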
A tensor can be flattened into a one-dimensional tensor by reshaping it with the method torch.flatten(). This method supports both real- and complex-valued input tensors. It takes a torch tensor as its input and returns a torch tensor flattened into one dimension.
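For the PyTorch side, a short sketch (example values are my own, not from the answer):

```python
import torch

# Flatten a real-valued (2, 3) tensor into one dimension.
t = torch.arange(6).reshape(2, 3)
print(torch.flatten(t))  # tensor([0, 1, 2, 3, 4, 5])

# Complex-valued tensors are flattened the same way.
c = torch.tensor([[1 + 2j], [3 + 4j]])
print(torch.flatten(c).shape)  # torch.Size([2])
```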
Both tf.reshape(w, [-1]) and tf.squeeze(w) are "cheap" in that they operate only on the metadata (i.e. the shape) of the given tensor and don't modify the data itself. Of the two, tf.reshape() has slightly simpler logic internally, but the performance of the two should be indistinguishable.
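For a (1, N) vector, where the question applies, the two calls are interchangeable; a quick check (with a made-up example tensor):

```python
import tensorflow as tf

w = tf.constant([[1.0, 2.0, 3.0]])  # shape (1, 3)
a = tf.reshape(w, [-1])
b = tf.squeeze(w)

# Same shape and same values; neither call copies or alters the data.
print(a.shape == b.shape)           # True
print(bool(tf.reduce_all(a == b)))  # True
```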