def dropout(input, p=0.5, training=True, inplace=False)

inplace: If set to True, will do this operation in-place.
I would like to ask what "in-place" means for dropout. What does it do? Are there any performance changes when performing this operation?
Thanks
With inplace=True, dropout will zero out values directly in the input tensor itself, whereas with inplace=False you need to save the result of dropout(input) in another variable in order to use it.
Example:

import torch
import torch.nn as nn

inp = torch.tensor([1.0, 2.0, 3.0, 4.0, 5.0])

out_of_place_dropout = nn.Dropout(p=0.4)
print(inp)
output = out_of_place_dropout(inp)
print(output)  # surviving elements are scaled by 1 / (1 - p)
print(inp)     # notice that the input doesn't get changed here

inplace_dropout = nn.Dropout(p=0.4, inplace=True)
inplace_dropout(inp)
print(inp)     # notice that the input is changed now
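Regarding the performance question: the main effect of inplace=True is on memory, not speed. The module writes into (and returns) the very tensor you pass in, so no second output tensor of the same size is allocated. A minimal sketch of that, using a fixed seed and p=0.5 purely for illustration:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)  # for reproducibility of which elements are dropped

inp = torch.ones(8)
drop = nn.Dropout(p=0.5, inplace=True)
out = drop(inp)

# inplace=True mutates and returns the input tensor itself,
# so `out` and `inp` are the same object (no extra allocation).
print(out is inp)

# Each element is either zeroed or scaled by 1 / (1 - p) = 2.0.
print(inp)
```

The trade-off is that the original values of inp are destroyed, so inplace=True is unsafe whenever the input is needed again, e.g. by autograd for a backward pass through an earlier operation.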
PS: This is not related to what you asked, but try not to use input as a variable name, since input is a built-in Python function and assigning to it shadows the built-in. I am aware that the PyTorch docs also do that, and it is kinda funny.