Given is an array a:

a = np.arange(1, 11, dtype = 'float32')

With numpy, I can do the following:

np.divide(1.0, a, out = a)

Resulting in:

array([1.        , 0.5       , 0.33333334, 0.25      , 0.2       ,
       0.16666667, 0.14285715, 0.125     , 0.11111111, 0.1       ],
      dtype=float32)

Assuming that a is instead a pytorch tensor, the following operation fails:

torch.div(1.0, a, out = a)

The first parameter of div is expected to be a tensor of matching length/shape. If I substitute 1.0 with an array b filled with ones, its length equal to the length of a, it works. The downside is that I have to allocate memory for b. I can also do something like a = 1.0 / a, which will yet again allocate extra (temporary) memory.

How can I do this operation efficiently "in-place" (without the allocation of extra memory), ideally with broadcasting?
PyTorch follows the convention of using a trailing _ for in-place operations. For example:

add -> add_  # in-place equivalent
div -> div_  # in-place equivalent
etc.
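Applied to this question, the in-place counterparts let you overwrite a with its reciprocal without allocating a second tensor. A minimal sketch, assuming a reasonably recent PyTorch version (reciprocal_ is the in-place variant of torch.reciprocal):

import torch

a = torch.arange(1, 11, dtype=torch.float32)

# Overwrite a with 1/a element-wise, in place; no extra tensor is allocated.
a.reciprocal_()

# a now holds the reciprocals 1/1 through 1/10 as float32 values.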
Element-by-element in-place inverse:

>>> a = torch.arange(1, 11, dtype=torch.float32)
>>> a.pow_(-1)
>>> a
tensor([1.0000, 0.5000, 0.3333, 0.2500, 0.2000, 0.1667, 0.1429, 0.1250, 0.1111, 0.1000])

>>> a = torch.arange(1, 11, dtype=torch.float32)
>>> a.div_(a ** 2)
>>> a
tensor([1.0000, 0.5000, 0.3333, 0.2500, 0.2000, 0.1667, 0.1429, 0.1250, 0.1111, 0.1000])
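Worth noting: a ** 2 still materializes a temporary tensor before div_ runs, so if avoiding any extra allocation is the goal, a.pow_(-1) (or a.reciprocal_(), assuming your PyTorch version provides it) is the better fit.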