I trained an Inception_v3 for my task. I have 3 classes.
After training, I test the trained model, using the following code to load it:
model = models.inception_v3(pretrained=True, aux_logits=False)
model.fc = nn.Linear(model.fc.in_features, 3)
model.load_state_dict(torch.load('My Model Path.pth'))
This downloads a pretrained Inception_v3, changes the output layer, and loads my weights into the model. I obtained very good results, as I expected from the validation phase.
If I use the same code but with pretrained=False, the test results are very bad.
model = models.inception_v3(pretrained=False, aux_logits=False)
model.fc = nn.Linear(model.fc.in_features, 3)
model.load_state_dict(torch.load('My Model Path.pth'))
Since I load my own weights into the downloaded model, there should be no difference between pretrained=True and pretrained=False.
Does anyone know what changes?
pretrained=True has an additional effect on the inception_v3 model: it also sets transform_input=True, which controls whether the input is preprocessed according to the method used when the model was trained on ImageNet (see the torchvision source code for inception_v3).
When you set pretrained=False, if you want to make things comparable at test time, you should also set transform_input=True:
model = models.inception_v3(pretrained=False, aux_logits=False, transform_input=True)
model.fc = nn.Linear(model.fc.in_features, 3)
model.load_state_dict(torch.load('My Model Path.pth'))
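As a side note, transform_input is just an attribute checked in the model's forward pass, so (as a sketch, assuming the standard torchvision Inception3 class) you could equivalently enable it after construction instead of in the constructor:
import torch
import torch.nn as nn
from torchvision import models

model = models.inception_v3(pretrained=False, aux_logits=False)
model.fc = nn.Linear(model.fc.in_features, 3)
model.transform_input = True  # same effect as passing transform_input=True to the constructor
model.load_state_dict(torch.load('My Model Path.pth'))
model.eval()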
In case you're wondering, this is the preprocessing:
def _transform_input(self, x: Tensor) -> Tensor:
    if self.transform_input:
        x_ch0 = torch.unsqueeze(x[:, 0], 1) * (0.229 / 0.5) + (0.485 - 0.5) / 0.5
        x_ch1 = torch.unsqueeze(x[:, 1], 1) * (0.224 / 0.5) + (0.456 - 0.5) / 0.5
        x_ch2 = torch.unsqueeze(x[:, 2], 1) * (0.225 / 0.5) + (0.406 - 0.5) / 0.5
        x = torch.cat((x_ch0, x_ch1, x_ch2), 1)
    return x
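In other words, the transform takes an input normalized with the usual ImageNet mean/std and re-normalizes it to the (x - 0.5) / 0.5 scheme the original Inception weights expect. A minimal sketch of that equivalence (assuming an image tensor with values in [0, 1]):
import torch

img = torch.rand(1, 3, 299, 299)                    # fake image in [0, 1]
mean = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
std = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)

x = (img - mean) / std                              # standard ImageNet normalization
y = x * (std / 0.5) + (mean - 0.5) / 0.5            # what _transform_input does per channel
z = (img - 0.5) / 0.5                               # plain (x - 0.5) / 0.5 normalization
print(torch.allclose(y, z, atol=1e-6))              # True: the two pipelines agree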