Using Dropout in Pytorch: nn.Dropout vs. F.dropout

In PyTorch there are two ways to apply dropout: torch.nn.Dropout and torch.nn.functional.dropout.

I struggle to see the difference between them:

  • When to use what?
  • Does it make a difference?

I didn't see any performance difference when I switched between them.

asked Nov 21 '18 by CutePoison


People also ask

What is a dropout layer in PyTorch?

torch.nn.Dropout(p=0.5, inplace=False): during training, randomly zeroes some of the elements of the input tensor with probability p, using samples from a Bernoulli distribution. Each channel will be zeroed out independently on every forward call.
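
A minimal sketch of this behaviour: surviving elements are scaled by 1/(1-p) during training so the expected value of the output matches the input, and in eval mode dropout is a no-op.

import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)
x = torch.ones(8)

drop.train()
print(drop(x))   # roughly half the entries are 0, the survivors are scaled to 1/(1-p) = 2.0

drop.eval()
print(drop(x))   # in eval mode dropout is the identity: all ones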

What is inplace in dropout?

With inplace=True, dropout modifies the input tensor itself, whereas with inplace=False the input is left untouched and you need to save the result of dropout(input) in another variable in order to use it.
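
A small sketch of the difference, using a toy tensor:

import torch
import torch.nn as nn

x = torch.ones(5)

# inplace=False (default): x stays untouched, the result is a new tensor
drop = nn.Dropout(p=0.5, inplace=False)
out = drop(x)          # out holds the dropped-out values; x is still all ones

# inplace=True: the input tensor itself is modified (and also returned)
drop_inplace = nn.Dropout(p=0.5, inplace=True)
drop_inplace(x)        # x now has some elements zeroed and the rest scaled by 1/(1-p)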

What is dropout 2d?

nn.Dropout2d() zeroes out entire channels (feature maps) rather than individual elements, which helps promote independence between feature maps, and should be used instead of nn.Dropout on convolutional feature maps. Its parameters are p (float, optional), the probability of a channel being zeroed, and inplace (bool, optional), which if set to True performs the operation in place.
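
A minimal sketch of channel-wise dropout on a 4D activation (shapes chosen only for illustration):

import torch
import torch.nn as nn

x = torch.rand(1, 3, 4, 4)      # (batch, channels, height, width)
drop2d = nn.Dropout2d(p=0.5)
drop2d.train()
out = drop2d(x)                 # each of the 3 channels is either all zeros or scaled by 1/(1-p)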



2 Answers

The technical differences have already been shown in the other answer. However, the main difference is that nn.Dropout is a torch Module itself, which brings some conveniences:

A short example to illustrate some of the differences:

import torch
import torch.nn as nn

class Model1(nn.Module):
    # Model 1 using functional dropout
    def __init__(self, p=0.0):
        super().__init__()
        self.p = p

    def forward(self, inputs):
        return nn.functional.dropout(inputs, p=self.p, training=True)

class Model2(nn.Module):
    # Model 2 using dropout module
    def __init__(self, p=0.0):
        super().__init__()
        self.drop_layer = nn.Dropout(p=p)

    def forward(self, inputs):
        return self.drop_layer(inputs)

model1 = Model1(p=0.5)  # functional dropout
model2 = Model2(p=0.5)  # dropout module

# creating inputs
inputs = torch.rand(10)

# forwarding inputs in train mode
print('Normal (train) model:')
print('Model 1', model1(inputs))
print('Model 2', model2(inputs))
print()

# switching to eval mode
model1.eval()
model2.eval()

# forwarding inputs in evaluation mode
print('Evaluation mode:')
print('Model 1', model1(inputs))
print('Model 2', model2(inputs))

# show model summary
print('Print summary:')
print(model1)
print(model2)

Output:

Normal (train) model:
Model 1 tensor([ 1.5040,  0.0000,  0.0000,  0.8563,  0.0000,  0.0000,  1.5951,
         0.0000,  0.0000,  0.0946])
Model 2 tensor([ 0.0000,  0.3713,  1.9303,  0.0000,  0.0000,  0.3574,  0.0000,
         1.1273,  1.5818,  0.0946])

Evaluation mode:
Model 1 tensor([ 0.0000,  0.3713,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000,
         0.0000,  0.0000,  0.0000])
Model 2 tensor([ 0.7520,  0.1857,  0.9651,  0.4281,  0.7883,  0.1787,  0.7975,
         0.5636,  0.7909,  0.0473])

Print summary:
Model1()
Model2(
  (drop_layer): Dropout(p=0.5)
)

So which should I use?

Both are completely equivalent in terms of applying dropout, and even though the differences in usage are not that big, there are some reasons to favour nn.Dropout over nn.functional.dropout:

Dropout is designed to be applied only during training, so when doing predictions or evaluating the model you want dropout to be turned off.

The dropout module nn.Dropout conveniently handles this and shuts dropout off as soon as your model enters evaluation mode, while the functional dropout does not care about the evaluation / prediction mode.

Even though you can set functional dropout's training argument to False to turn it off, it is still not as convenient a solution as nn.Dropout.
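
For reference, a minimal sketch of how functional dropout can be made to respect train/eval mode by forwarding the module's own self.training flag (the class name Model1Fixed is only for illustration):

import torch.nn as nn
import torch.nn.functional as F

class Model1Fixed(nn.Module):
    def __init__(self, p=0.5):
        super().__init__()
        self.p = p

    def forward(self, inputs):
        # F.dropout only follows eval() if you pass self.training through yourself
        return F.dropout(inputs, p=self.p, training=self.training)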

Also, the drop rate is stored in the module, so you don't have to save it in an extra variable. In larger networks you might want to create different dropout layers with different drop rates; here nn.Dropout may increase readability and bring some convenience when the layers are used multiple times, as in the sketch below.
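
A small sketch of this, with hypothetical layer sizes and drop rates chosen only for illustration:

import torch.nn as nn
import torch.nn.functional as F

class BiggerNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(100, 50)
        self.drop1 = nn.Dropout(p=0.2)   # light dropout after the first layer
        self.fc2 = nn.Linear(50, 10)
        self.drop2 = nn.Dropout(p=0.5)   # heavier dropout after the second layer

    def forward(self, x):
        x = self.drop1(F.relu(self.fc1(x)))
        x = self.drop2(F.relu(self.fc2(x)))
        return x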

Finally, all modules which are assigned to your model are registered in your model. So your model class keeps track of them, which is why you can turn off the dropout module just by calling eval(). When using the functional dropout your model is not aware of it, and it won't appear in any summary.

answered by MBT


If you look at the source code of nn.Dropout and functional.dropout, you can see that the functional API provides the underlying operations and the nn modules are implemented on top of that interface.
Look at the implementations in the nn classes:

from .. import functional as F

class Dropout(_DropoutNd):
    def forward(self, input):
        return F.dropout(input, self.p, self.training, self.inplace)

class Dropout2d(_DropoutNd):
    def forward(self, input):
        return F.dropout2d(input, self.p, self.training, self.inplace)

And so on.

Implementation in the functional module:

def dropout(input, p=0.5, training=False, inplace=False):
    return _functions.dropout.Dropout.apply(input, p, training, inplace)

def dropout2d(input, p=0.5, training=False, inplace=False):
    return _functions.dropout.FeatureDropout.apply(input, p, training, inplace)

Look at the example below to understand:

import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
        self.conv2_drop = nn.Dropout2d()
        self.fc1 = nn.Linear(320, 50)
        self.fc2 = nn.Linear(50, 10)

    def forward(self, x):
        x = F.relu(F.max_pool2d(self.conv1(x), 2))
        x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
        x = x.view(-1, 320)
        x = F.relu(self.fc1(x))
        x = F.dropout(x, training=self.training)
        x = self.fc2(x)
        return F.log_softmax(x, dim=1)

There is an F.dropout call in the forward() function and an nn.Dropout2d module created in the __init__() function. Now this is the explanation:

In PyTorch you define your Models as subclasses of torch.nn.Module.

In the __init__ function, you are supposed to initialize the layers you want to use. Unlike Keras, PyTorch is more low-level and you have to specify the sizes of your network so that everything matches.

In the forward method, you specify the connections of your layers. This means that you use the layers you already initialized, so the same layer (with the same weights) is re-used on every forward pass of data you make.

torch.nn.functional contains some useful functions, like activation functions and convolution operations, that you can use. However, these are not full layers, so if you want to define a layer of any kind you should use torch.nn.Module.

You would use the torch.nn.functional conv operations to define a custom layer, for example one built around a convolution operation, but not to define a standard convolution layer.
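
A minimal sketch of such a custom layer, assuming you want to manage the weight yourself (MyConv is a hypothetical name chosen for illustration):

import torch
import torch.nn as nn
import torch.nn.functional as F

class MyConv(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size):
        super().__init__()
        # the Module owns the parameters ...
        self.weight = nn.Parameter(
            torch.randn(out_channels, in_channels, kernel_size, kernel_size) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_channels))

    def forward(self, x):
        # ... while the functional op performs the actual computation
        return F.conv2d(x, self.weight, self.bias)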

answered by M. Doosti Lakhani