I'm using PyTorch to classify a series of images. The network is defined as follows:
from collections import OrderedDict

import torch
from torch import nn, optim
from torchvision import models

# Load a pretrained VGG16 and freeze its convolutional features
model = models.vgg16(pretrained=True)
model.cuda()
for param in model.parameters():
    param.requires_grad = False

# Replace the classifier head; the new layers are trainable by default
classifier = nn.Sequential(OrderedDict([
    ('fc1', nn.Linear(25088, 4096)),
    ('relu', nn.ReLU()),
    ('fc2', nn.Linear(4096, 102)),
    ('output', nn.LogSoftmax(dim=1))
]))
model.classifier = classifier
The criterion and optimizer are as follows:
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.classifier.parameters(), lr=0.001)
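As a sanity check (a minimal sketch continuing the code above), the freezing can be confirmed by listing which parameters still require gradients:

# Only the new classifier's parameters should show up here
trainable = [name for name, p in model.named_parameters() if p.requires_grad]
print(trainable)  # expect only classifier.fc1.* and classifier.fc2.* entries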
My validation function is as follows:
def validation(model, testloader, criterion):
    test_loss = 0
    accuracy = 0
    for images, labels in testloader:
        images.resize_(images.shape[0], 784)
        output = model.forward(images)
        test_loss += criterion(output, labels).item()
        ps = torch.exp(output)
        equality = (labels.data == ps.max(dim=1)[1])
        accuracy += equality.type(torch.FloatTensor).mean()
    return test_loss, accuracy
This is the piece of code that is throwing the following error:
RuntimeError: input has less dimensions than expected
epochs = 3
print_every = 40
steps = 0
running_loss = 0
testloader = dataloaders['test']
# change to cuda
model.to('cuda')
for e in range(epochs):
    running_loss = 0
    for ii, (inputs, labels) in enumerate(dataloaders['train']):
        steps += 1
        inputs, labels = inputs.to('cuda'), labels.to('cuda')
        optimizer.zero_grad()
        # Forward and backward passes
        outputs = model.forward(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
        if steps % print_every == 0:
            model.eval()
            with torch.no_grad():
                test_loss, accuracy = validation(model, testloader, criterion)
            print("Epoch: {}/{}.. ".format(e+1, epochs),
                  "Training Loss: {:.3f}.. ".format(running_loss/print_every),
                  "Test Loss: {:.3f}.. ".format(test_loss/len(testloader)),
                  "Test Accuracy: {:.3f}".format(accuracy/len(testloader)))
            running_loss = 0
Any help?
A one-liner to get the accuracy, assuming the 0th dimension is the batch size and the 1st dimension holds the logits/raw values for the classification labels:
acc = (true == mdl(x).max(1)[1]).sum().item() / true.size(0)
Just in case it helps someone.
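A minimal sketch of that one-liner in action; mdl, x, and true are hypothetical stand-ins for any classifier, input batch, and ground-truth labels:

import torch
from torch import nn

mdl = nn.Linear(10, 3)            # toy classifier: 10 features -> 3 classes
x = torch.randn(8, 10)            # batch of 8 samples
true = torch.randint(0, 3, (8,))  # ground-truth class indices

# max(1)[1] takes the argmax over the class dimension, i.e. the predicted label
acc = (true == mdl(x).max(1)[1]).sum().item() / true.size(0)
print(acc)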
If you don't have a GPU system (say you are developing on a laptop and will eventually test on a server with a GPU) you can do the same with:
if torch.cuda.is_available():
    inputs = inputs.to('cuda')
else:
    inputs = inputs.to('cpu')
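If the goal is one script that runs unchanged on both machines, a common pattern (a sketch, not from the original post) is to pick the device once and reuse it:

import torch
from torch import nn

# Choose the device once; everything else is written against it
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = nn.Linear(4, 2).to(device)   # toy model, just for illustration
inputs = torch.randn(3, 4).to(device)
print(model(inputs).device)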
Also, if you are wondering why there is a LogSoftmax instead of a Softmax: it's because NLLLoss is being used as the loss function, and NLLLoss expects log-probabilities as its input. You can read more about softmax here
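To make that pairing concrete, here is a small sketch showing that LogSoftmax followed by NLLLoss computes the same value as CrossEntropyLoss applied to the raw logits:

import torch
from torch import nn

logits = torch.randn(4, 102)           # raw scores for 4 samples, 102 classes
targets = torch.randint(0, 102, (4,))  # ground-truth class indices

# NLLLoss expects log-probabilities, hence the LogSoftmax layer in the model
nll = nn.NLLLoss()(nn.LogSoftmax(dim=1)(logits), targets)

# CrossEntropyLoss fuses LogSoftmax and NLLLoss into one call
ce = nn.CrossEntropyLoss()(logits, targets)

print(torch.allclose(nll, ce))  # True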
I needed to change the validation function as follows:
def validation(model, testloader, criterion):
    test_loss = 0
    accuracy = 0
    for inputs, labels in testloader:
        # Move the batch to the GPU to match the model's device
        inputs, labels = inputs.to('cuda'), labels.to('cuda')
        output = model.forward(inputs)
        test_loss += criterion(output, labels).item()
        ps = torch.exp(output)
        equality = (labels.data == ps.max(dim=1)[1])
        accuracy += equality.type(torch.FloatTensor).mean()
    return test_loss, accuracy
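The other change from the original function is dropping the images.resize_(images.shape[0], 784) line. That flattening is an MNIST-style habit; VGG16's convolutional layers expect 4-D batches, which is what the dimension error was complaining about. A quick shape sketch (assuming the standard 224x224 input size):

import torch
from torchvision import models

model = models.vgg16(pretrained=True).eval()
batch = torch.randn(2, 3, 224, 224)  # [N, C, H, W] is what the conv layers expect
print(model(batch).shape)            # torch.Size([2, 1000]) with the stock head
# batch.view(2, -1) would be 2-D and make the first Conv2d fail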
The inputs need to be moved to the GPU with inputs = inputs.to('cuda'). Note that for tensors .to() is not in-place, so the result must be assigned back.
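A minimal sketch of that point; unlike calling .to() on a module, calling it on a tensor returns a moved copy:

import torch

if torch.cuda.is_available():
    t = torch.zeros(3)
    t.to('cuda')      # no effect: the moved copy is returned and discarded
    print(t.device)   # cpu
    t = t.to('cuda')  # correct: rebind the name to the moved tensor
    print(t.device)   # cuda:0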