I am fine-tuning a ResNet on my dataset, which has multiple labels.
I would like to convert the 'scores' of the classification layer to probabilities and use those probabilities to compute the loss during training.
Could you give example code for this? Can I use it like this:
P = net.forward(x)
p = torch.nn.functional.softmax(P, dim=1)
loss = torch.nn.functional.cross_entropy(P, y)
I am unclear whether this is the correct way or not, since I am passing probabilities as the input to the cross-entropy loss.
So you are training a model (i.e. a ResNet) with cross-entropy in PyTorch. Your loss calculation would look like this:
logit = model(x)
loss = torch.nn.functional.cross_entropy(logit, y)
Note that cross_entropy takes the raw logits, not probabilities (it applies log-softmax internally). In this case, you can calculate the probabilities of all classes by doing:
logit = model(x)
p = torch.nn.functional.softmax(logit, dim=1)
# to calculate the loss from probabilities, take the log and use nll_loss
loss = torch.nn.functional.nll_loss(torch.log(p), y)
Note that if you use probabilities you have to take the log manually, which is bad for numerical reasons: softmax outputs can underflow to zero, making torch.log(p) produce -inf. Instead, either use log_softmax (followed by nll_loss) or cross_entropy on the logits, in which case you compute the loss with cross-entropy and compute the probabilities separately whenever you need them.
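A minimal sketch of the three formulations side by side, using random logits as a stand-in for the ResNet output (model and data names here are illustrative, not from your code). All three produce the same loss value; they differ only in numerical robustness:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
logits = torch.randn(4, 10)            # stand-in for model(x): batch of 4, 10 classes
y = torch.randint(0, 10, (4,))         # stand-in for the integer class targets

# 1) Recommended: cross_entropy directly on raw logits
loss_ce = F.cross_entropy(logits, y)

# 2) Equivalent and numerically stable: log_softmax + nll_loss
loss_ls = F.nll_loss(F.log_softmax(logits, dim=1), y)

# 3) Via explicit probabilities: works, but log(p) can hit -inf if p underflows
p = F.softmax(logits, dim=1)
loss_p = F.nll_loss(torch.log(p), y)
```

With well-scaled logits like these, `torch.allclose` confirms all three losses agree; option 3 only breaks down for extreme logits, which is why the first two are preferred in training loops.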