What is the volatile attribute of a Variable in PyTorch? Here's a sample of defining a variable with it in PyTorch.
datatensor = Variable(data, volatile=True)
Despite sharing a name, PyTorch's volatile flag is unrelated to the volatile keyword in C/C++ or Java, which tells the compiler or JVM that a variable may be changed by an outside factor (another thread, the operating system) and therefore must not be cached. In PyTorch, volatile=True told autograd that the Variable would never need gradients, so no computation graph was built for any operation that used it.
A PyTorch Variable is a wrapper around a PyTorch Tensor, and represents a node in a computational graph. If x is a Variable, then x.data is a Tensor giving its value, and x.grad is another Variable holding the gradient of x with respect to some scalar value.
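In PyTorch 0.4.0 and later, Tensor has absorbed Variable's role, but the same value/gradient relationship can be sketched directly on a Tensor (the values here are just an illustration):

```python
import torch

# A tensor that tracks gradients plays the role of the old Variable.
x = torch.tensor([2.0, 3.0], requires_grad=True)

y = (x ** 2).sum()   # a scalar built from x
y.backward()         # fills x.grad with dy/dx = 2*x

print(x.grad)        # tensor([4., 6.])
```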
To declare a Variable volatile in PyTorch, pass volatile=True to its constructor, as in the sample above.
Basically, set the input to a network to volatile if you are doing inference only and won't be running backpropagation, in order to conserve memory.
From the docs:
Volatile is recommended for purely inference mode, when you’re sure you won’t be even calling .backward(). It’s more efficient than any other autograd setting - it will use the absolute minimal amount of memory to evaluate the model. volatile also determines that requires_grad is False.
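Since volatile was removed, the modern way to get the same inference-only behavior is the torch.no_grad() context manager. A minimal sketch (the small Linear model is hypothetical, just to have something to run):

```python
import torch

model = torch.nn.Linear(4, 2)   # hypothetical tiny model for illustration
x = torch.randn(1, 4)

# Replacement for volatile=True: no graph is recorded inside this block.
with torch.no_grad():
    out = model(x)

print(out.requires_grad)        # False: the output carries no autograd history
```

Because no graph is built, intermediate activations are freed immediately, which is where the memory saving comes from.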
Edit: The volatile keyword has been deprecated as of PyTorch version 0.4.0.
For versions of PyTorch prior to 0.4.0, Variable and Tensor were two different entities. For Variables, you could specify two flags: volatile and requires_grad. Both were used for fine-grained exclusion of subgraphs from gradient computation.
The difference between volatile and requires_grad is in how the flag propagates to the outputs of an operation. If even a single input to an operation has volatile=True, its output is also marked volatile. For requires_grad, all inputs to the operation must have requires_grad=False for the output to be flagged the same way.
From PyTorch 0.4.0, Tensors and Variables have merged, and the volatile flag is deprecated.
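The propagation rules above can be demonstrated with current PyTorch, using torch.no_grad() in place of the removed volatile flag (the tensors here are arbitrary examples):

```python
import torch

a = torch.randn(3, requires_grad=True)
b = torch.randn(3)            # requires_grad defaults to False

# requires_grad: one input requiring gradients is enough to flag the output.
c = a + b
print(c.requires_grad)        # True

# no_grad (like volatile did): dominates regardless of the inputs' flags.
with torch.no_grad():
    d = a + b
print(d.requires_grad)        # False
```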