I had a look at this tutorial in the PyTorch docs to understand transfer learning, and there is one line I failed to understand.

After the loss is calculated using `loss = criterion(outputs, labels)`, the running loss is calculated using `running_loss += loss.item() * inputs.size(0)`, and finally the epoch loss is calculated using `running_loss / dataset_sizes[phase]`.

Isn't `loss.item()` supposed to be the loss for an entire mini-batch (please correct me if I am wrong)? That is, if the `batch_size` is 4, `loss.item()` would give the loss for the entire set of 4 images. If this is true, why is `loss.item()` multiplied by `inputs.size(0)` when calculating `running_loss`? Isn't this an extra multiplication in that case?

Any help would be appreciated. Thanks!
From the PyTorch documentation for `Tensor.item()`: "Returns the value of this tensor as a standard Python number. This only works for tensors with one element. For other cases, see `tolist()`."
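As a quick sanity check of what `item()` does, here is a minimal sketch using a made-up one-element tensor:

```python
import torch

t = torch.tensor([2.5])   # a one-element tensor
x = t.item()              # extracts the value as a plain Python float (lives on the CPU)

assert isinstance(x, float)
assert x == 2.5
```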
It's because the loss given by `CrossEntropyLoss` (and most other loss functions) is averaged over the number of elements in the mini-batch, i.e. the `reduction` parameter is `'mean'` by default:
`torch.nn.CrossEntropyLoss(weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean')`
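A minimal check of that default behaviour (the logits and labels here are made up for illustration): with `reduction='mean'`, the loss times the batch size equals the `reduction='sum'` loss.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
logits = torch.randn(4, 3)            # batch of 4 samples, 3 classes
labels = torch.tensor([0, 2, 1, 1])

mean_loss = nn.CrossEntropyLoss(reduction='mean')(logits, labels)
sum_loss = nn.CrossEntropyLoss(reduction='sum')(logits, labels)

# the default 'mean' reduction is the per-sample average
assert torch.isclose(mean_loss * 4, sum_loss)
```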
Hence, `loss.item()` contains the loss of the entire mini-batch, but divided by the batch size. That's why `loss.item()` is multiplied by the batch size, given by `inputs.size(0)`, when calculating `running_loss`.
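Putting it together, here is a sketch with a hypothetical dataset of 8 samples split into two mini-batches (the model is skipped and raw logits stand in for `outputs`), showing that the tutorial's bookkeeping recovers the per-sample mean loss over the whole epoch:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
criterion = nn.CrossEntropyLoss()       # reduction='mean' by default

# hypothetical dataset: 8 samples, 3 classes, split into two batches of 4
logits = torch.randn(8, 3)              # stand-in for model outputs
labels = torch.randint(0, 3, (8,))
batches = [(logits[:4], labels[:4]), (logits[4:], labels[4:])]

running_loss = 0.0
for inputs, targets in batches:
    loss = criterion(inputs, targets)
    # scale the per-sample mean back up to the batch's summed loss
    running_loss += loss.item() * inputs.size(0)

epoch_loss = running_loss / 8           # dataset_sizes[phase] == 8 here

# identical (up to float rounding) to the mean over the whole dataset at once
assert abs(epoch_loss - criterion(logits, labels).item()) < 1e-5
```

Without the `* inputs.size(0)` factor, a final partial batch (say, 3 samples instead of 4) would be weighted the same as a full one, skewing the epoch average.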