
What is a volatile variable in PyTorch?

What is the volatile attribute of a Variable in PyTorch? Here is sample code defining such a Variable.

from torch.autograd import Variable  # Variable API, deprecated since PyTorch 0.4.0
datatensor = Variable(data, volatile=True)
asked Apr 15 '18 by satya

People also ask

What are volatile variables?

A volatile variable is a variable that is marked or declared with the keyword "volatile", indicating that it can be changed by some outside factor, such as the operating system or other software.

What is the purpose of a volatile variable?

The volatile modifier is used to let the JVM know that a thread accessing the variable must always merge its own private copy of the variable with the master copy in main memory. Accessing a volatile variable synchronizes all the cached copies of the variable with main memory.

What does variable do in PyTorch?

A PyTorch Variable is a wrapper around a PyTorch Tensor and represents a node in a computational graph. If x is a Variable, then x.data is a Tensor giving its value, and x.grad is another Variable holding the gradient of x with respect to some scalar value (see the sketch after these questions).

How do you declare a volatile variable?

To declare a variable volatile, include the keyword volatile before or after the data type in the variable definition.
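
To make the Variable answer above concrete, here is a minimal sketch using the legacy pre-0.4.0 Variable API; the tensor values here are invented for the example:

import torch
from torch.autograd import Variable  # legacy wrapper, merged into Tensor in 0.4.0

x = Variable(torch.ones(2, 2), requires_grad=True)  # node in the graph
y = (x * 3).sum()    # y is also a Variable, produced by tracked operations
y.backward()         # compute the gradient of y with respect to x

print(x.data)   # the underlying Tensor holding x's value
print(x.grad)   # Variable holding dy/dx (all 3s here)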


2 Answers

Basically, set the input to a network to volatile if you are doing inference only and won't be running backpropagation; this conserves memory.

From the docs:

Volatile is recommended for purely inference mode, when you’re sure you won’t be even calling .backward(). It’s more efficient than any other autograd setting - it will use the absolute minimal amount of memory to evaluate the model. volatile also determines that requires_grad is False.

Edit: the volatile keyword has been deprecated as of PyTorch version 0.4.0.
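
Since the flag is gone, here is a minimal sketch of the modern replacement, torch.no_grad(); the model and input below are stand-ins invented for the example:

import torch

model = torch.nn.Linear(10, 2)   # stand-in model
inputs = torch.randn(4, 10)      # stand-in batch

model.eval()                     # switch off dropout/batch-norm training behavior
with torch.no_grad():            # replaces volatile=True for inference
    outputs = model(inputs)      # no autograd graph is built, saving memory

print(outputs.requires_grad)     # False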

answered Oct 10 '22 by decrispell


In versions of PyTorch prior to 0.4.0, Variable and Tensor were two different entities. For Variables, you could specify two flags: volatile and requires_grad. Both were used for fine-grained exclusion of subgraphs from gradient computation.

The difference between volatile and requires_grad is in how the flag propagates to the outputs of an operation. If there is even a single volatile=True Variable among the inputs to an operation, its output is also marked volatile. For requires_grad, all inputs to the operation must be flagged requires_grad=False for the output to be flagged the same way.
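
A small sketch of that propagation rule; note this only runs as described on a pre-0.4.0 install, since later versions ignore the volatile flag:

import torch
from torch.autograd import Variable  # legacy API

a = Variable(torch.ones(2), requires_grad=True)
b = Variable(torch.ones(2), volatile=True)

# A single volatile input makes the output volatile (and hence
# requires_grad=False), even though the other input requires grad.
out = a + b
print(out.volatile)        # True
print(out.requires_grad)   # False

# With requires_grad, every input must have requires_grad=False
# for the output to be excluded from gradient computation.
c = Variable(torch.ones(2))   # requires_grad defaults to False
print((a + c).requires_grad)  # True: one input still requires grad
print((c + c).requires_grad)  # False: no input requires grad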

Since PyTorch 0.4.0, Tensors and Variables have merged, and the volatile flag is deprecated.

answered Oct 10 '22 by Jadiel de Armas