 

Pytorch vs. Keras: Pytorch model overfits heavily

For several days now, I've been trying to replicate my Keras training results with PyTorch. Whatever I do, the PyTorch model overfits far earlier and more strongly (as measured on the validation set) than the Keras model. For PyTorch I use the same Xception code from https://github.com/Cadene/pretrained-models.pytorch.

The data loading, augmentation, validation, training schedule, etc. are equivalent. Am I missing something obvious? There must be a general problem somewhere. I have tried countless different model configurations, but nothing comes even close to the Keras training. Can somebody help?
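To be concrete about "equivalent": both pipelines feed the networks inputs in the same [-1, 1] range. A sketch of what I mean (my actual augmentation pipeline is omitted; the Keras Xception preprocess_input and the Cadene model's documented mean/std of 0.5 both map images to [-1, 1]):

from keras.applications.xception import preprocess_input  # scales pixels to [-1, 1]
from torchvision import transforms

# PyTorch equivalent of the Keras Xception preprocessing
pytorch_preprocess = transforms.Compose([
    transforms.Resize((img_height, img_width)),                        # same size as the Keras input_shape
    transforms.ToTensor(),                                             # [0, 1]
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),   # -> [-1, 1], matching the Cadene settings
])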

Keras model: val accuracy > 90%

from keras import applications
from keras.layers import GlobalMaxPooling2D, Dense, Dropout
from keras.models import Model

# base model
base_model = applications.Xception(weights='imagenet', include_top=False, input_shape=(img_width, img_height, 3))

# top model
x = base_model.output
x = GlobalMaxPooling2D()(x)
x = Dense(512, activation='relu')(x)
x = Dropout(0.5)(x)
predictions = Dense(4, activation='softmax')(x)

# this is the model we will train
model = Model(inputs=base_model.input, outputs=predictions)

# Compile model
from keras import optimizers
adam = optimizers.Adam(lr=0.0001)
model.compile(loss='categorical_crossentropy', optimizer=adam, metrics=['accuracy'])

# LROnPlateau etc. with equivalent settings as pytorch
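For reference, that callback with the settings mirrored from the PyTorch scheduler further down would look roughly like this (a sketch):

from keras.callbacks import ReduceLROnPlateau

# mirrors lr_scheduler.ReduceLROnPlateau(optimizer, 'min', factor=0.2, patience=5, cooldown=5)
reduce_lr = ReduceLROnPlateau(monitor='val_loss', mode='min', factor=0.2, patience=5, cooldown=5)
# passed to model.fit(..., callbacks=[reduce_lr])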

PyTorch model: val accuracy ~81%

import torch
import torch.nn as nn
import torch.nn.functional as F

from xception import xception

# modified from https://github.com/Cadene/pretrained-models.pytorch
class XCeption(nn.Module):
    def __init__(self, num_classes):
        super(XCeption, self).__init__()

        original_model = xception(pretrained="imagenet")

        self.features=nn.Sequential(*list(original_model.children())[:-1])
        self.last_linear = nn.Sequential(
             nn.Linear(original_model.last_linear.in_features, 512),
             nn.ReLU(),
             nn.Dropout(p=0.5),
             nn.Linear(512, num_classes)
        )

    def logits(self, features):
        x = F.relu(features)
        x = F.adaptive_max_pool2d(x, (1, 1))
        x = x.view(x.size(0), -1)
        x = self.last_linear(x)
        return x

    def forward(self, input):
        x = self.features(input)
        x = self.logits(x)
        return x 

import torch.optim as optim
from torch.optim import lr_scheduler

device = torch.device("cuda")
model = XCeption(len(class_names))
if torch.cuda.device_count() > 1:
    print("Let's use", torch.cuda.device_count(), "GPUs!")
    # dim = 0 [30, xxx] -> [10, ...], [10, ...], [10, ...] on 3 GPUs
    model = nn.DataParallel(model)
model.to(device)

criterion = nn.CrossEntropyLoss(size_average=False)
optimizer = optim.Adam(model.parameters(), lr=0.0001)
scheduler = lr_scheduler.ReduceLROnPlateau(optimizer, 'min', factor=0.2, patience=5, cooldown=5)

Thank you very much!

Update: Settings:

criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=lr)
scheduler = lr_scheduler.ReduceLROnPlateau(optimizer, 'min', factor=0.2, patience=5, cooldown=5)

model = train_model(model, train_loader, val_loader, 
                        criterion, optimizer, scheduler, 
                        batch_size, trainmult=8, valmult=10, 
                        num_epochs=200, epochs_top=0)

Cleaned training function:

def train_model(model, train_loader, val_loader, criterion, optimizer, scheduler, batch_size, trainmult=1, valmult=1, num_epochs=None, epochs_top=0):
    for epoch in range(num_epochs):
        for phase in ['train', 'val']:
            running_loss = 0.0
            running_acc = 0
            total = 0
            # iterate over the data
            if phase == "train":
                model.train(True)  # set model to training mode
                for i in range(trainmult):
                    for data in train_loader:
                        # get the inputs
                        inputs, labels = data
                        inputs, labels = inputs.to(torch.device("cuda")), labels.to(torch.device("cuda"))
                        # zero the parameter gradients
                        optimizer.zero_grad()
                        # forward
                        outputs = model(inputs)
                        _, preds = torch.max(outputs, 1)
                        loss = criterion(outputs, labels)
                        # backward + optimize only in the training phase
                        loss.backward()
                        optimizer.step()
                        # statistics
                        total += labels.size(0)
                        running_loss += loss.item() * labels.size(0)
                        running_acc += torch.sum(preds == labels)
                        train_loss = running_loss / total
                        train_acc = running_acc.double() / total
            else:
                model.train(False)  # set model to evaluation mode
                with torch.no_grad():
                    for i in range(valmult):
                        for data in val_loader:
                            # get the inputs
                            inputs, labels = data
                            inputs, labels = inputs.to(torch.device("cuda")), labels.to(torch.device("cuda"))
                            # forward only
                            outputs = model(inputs)
                            _, preds = torch.max(outputs, 1)
                            loss = criterion(outputs, labels)
                            # statistics
                            total += labels.size(0)
                            running_loss += loss.item() * labels.size(0)
                            running_acc += torch.sum(preds == labels)
                            val_loss = running_loss / total
                            val_acc = running_acc.double() / total
                scheduler.step(val_loss)
    return model
asked Apr 28 '18 by Jakob Steinfeldt




2 Answers

It may be because of the type of weight initialization you are using; otherwise this should not happen. Try using the same initializer in both models.
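For example (a sketch, not part of the original answer): Keras Dense layers default to Glorot/Xavier-uniform weights with zero biases, while torch.nn.Linear uses a Kaiming-uniform variant, so you could re-initialize the new PyTorch head to match the Keras defaults:

import torch.nn as nn

def init_like_keras(m):
    # hypothetical helper: apply Keras' default Dense initialization
    # (glorot_uniform weights, zero biases) to every Linear layer
    if isinstance(m, nn.Linear):
        nn.init.xavier_uniform_(m.weight)
        nn.init.zeros_(m.bias)

# apply only to the newly added head from the question's model
# (use model.module.last_linear if the model is wrapped in DataParallel)
model.last_linear.apply(init_like_keras)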

answered Sep 30 '22 by lnx


self.features=nn.Sequential(*list(original_model.children())[:-1])

Are you sure that this line re-instantiates your model in exactly the same way? You're using an nn.Sequential instead of the original Xception model's forward function. If anything in that forward function is not exactly equivalent to running the child modules in sequence, it will not reproduce the same performance.
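One quick way to check this (a sketch, not from the original answer, assuming the Cadene xception module from the question is importable) is to compare the wrapped Sequential against the model's own feature extractor:

import torch
import torch.nn as nn
from xception import xception

# build both feature paths from the same pretrained weights
original_model = xception(pretrained="imagenet").eval()
wrapped = nn.Sequential(*list(original_model.children())[:-1]).eval()

# Xception's native input size is 299x299
x = torch.randn(1, 3, 299, 299)
with torch.no_grad():
    a = original_model.features(x)  # the model's intended feature path
    b = wrapped(x)                  # the nn.Sequential reconstruction

# if this prints False, the Sequential wrapper is not equivalent
print(torch.allclose(a, b, atol=1e-5))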

Instead of wrapping it in a Sequential, you could simply overwrite the last layer:

# load the pretrained weights before you change the architecture
my_model = xception(pretrained="imagenet")
# overwrite the original last_linear with your own head
my_model.last_linear = nn.Sequential(
    nn.Linear(my_model.last_linear.in_features, 512),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(512, num_classes)
)
answered Sep 28 '22 by user2831602