When adding a loss in PyTorch, I see the same function in both torch.nn.functional and torch.nn. What is the difference between torch.nn.CrossEntropyLoss() and torch.nn.functional.cross_entropy?
Quoting the answer @Alban D gave to a similar question on the PyTorch discussion forum, "F.cross_entropy vs torch.nn.CrossEntropyLoss":
There isn't much difference for losses. The main difference between nn.functional.xxx and nn.Xxx is that one has state and one does not.
This means that for a linear layer, for example, if you use the functional version you will need to handle the weights yourself (including passing them to the optimizer or moving them to the GPU), while the nn.Xxx version does all of that for you via .parameters() or .to(device).
For loss functions, since no parameters are needed (in general), you won't find much difference. One exception: if you use cross entropy with some weighting between your classes, with the nn.CrossEntropyLoss() module you provide your weights only once, when creating the module, and then simply use it. With the functional version, you need to pass the weights every single time you call it.
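The state difference for a parametrized layer can be sketched like this (a minimal illustration, assuming a small linear layer with made-up shapes):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(3, 4)  # batch of 3 inputs with 4 features

# Module version: nn.Linear creates and owns its weight and bias.
layer = nn.Linear(4, 2)
out_module = layer(x)

# Functional version: you create and manage the parameters yourself,
# and you are responsible for registering them with an optimizer
# or moving them to a device.
weight = torch.randn(2, 4, requires_grad=True)
bias = torch.randn(2, requires_grad=True)
out_functional = F.linear(x, weight, bias)
```

With the module, `layer.parameters()` hands both tensors to an optimizer and `layer.to(device)` moves them together; with the functional form you must track `weight` and `bias` yourself.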
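The class-weighting point can be demonstrated directly; both forms compute the same value, but only the module remembers the weights between calls (example values are made up):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

logits = torch.randn(5, 3)              # batch of 5, 3 classes
targets = torch.tensor([0, 2, 1, 0, 2])
class_weights = torch.tensor([1.0, 2.0, 0.5])

# Module version: pass the weights once, at construction time.
criterion = nn.CrossEntropyLoss(weight=class_weights)
loss_module = criterion(logits, targets)

# Functional version: the weights must be passed on every call.
loss_functional = F.cross_entropy(logits, targets, weight=class_weights)

# Both produce the same weighted loss.
assert torch.allclose(loss_module, loss_functional)
```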