
what is the pytorch equivalent of a tensorflow linear regression?

I am learning PyTorch and want to do a basic linear regression on data created this way:

import matplotlib.pyplot as plt
from sklearn.datasets import make_regression

x, y = make_regression(n_samples=100, n_features=1, noise=15, random_state=42)
y = y.reshape(-1, 1)
print(x.shape, y.shape)

plt.scatter(x, y)
plt.show()

I know that in TensorFlow this code solves it:

model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Dense(units=1, activation='linear', input_shape=(x.shape[1], )))

model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.05), loss='mse')

hist = model.fit(x, y, epochs=15, verbose=0)

but I need to know what the PyTorch equivalent would look like. What I tried was this:

import torch
import torch.nn as nn
import torch.nn.functional as F


# Model class
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.linear = nn.Linear(1, 1)
        
    def forward(self, x):
        x = self.linear(x)
        return x
    
    def predict(self, x):
        return self.forward(x)
    
model = Net()

loss_fn = F.mse_loss
opt = torch.optim.SGD(model.parameters(), lr=0.05)

# Training function
def fit(num_epochs, model, loss_fn, opt, train_dl):
    # Repeat for given number of epochs
    for epoch in range(num_epochs):
        
        # Train with batches of data
        for xb, yb in train_dl:
            
            # 1. Generate predictions
            pred = model(xb)
            
            # 2. Calculate Loss
            loss = loss_fn(pred, yb)
            
            # 3. Compute gradients
            loss.backward()
            
            # 4. Update parameters using gradients
            opt.step()
            
            # 5. Reset the gradients to zero
            opt.zero_grad()
            
        # Print the progress
        if (epoch+1) % 10 == 0:
            print('Epoch [{}/{}], Loss: {:.4f}'.format(epoch+1, num_epochs, loss.item()))

# Training
fit(200, model, loss_fn, opt, data_loader)

But the model doesn't learn anything, and I don't know what else to try.

The input/output dimensions are (1, 1).

asked Sep 10 '20 by manynothing



1 Answer

Dataset

First of all, you should define a torch.utils.data.Dataset:

import torch
from sklearn.datasets import make_regression


class RegressionDataset(torch.utils.data.Dataset):
    def __init__(self):
        data = make_regression(n_samples=100, n_features=1, noise=0.1, random_state=42)
        self.x = torch.from_numpy(data[0]).float()
        self.y = torch.from_numpy(data[1]).float()

    def __len__(self):
        return len(self.x)

    def __getitem__(self, index):
        return self.x[index], self.y[index]

It converts the numpy data to PyTorch tensors inside __init__ and casts them to float (numpy defaults to double, while PyTorch's default is float, which uses less memory).

Apart from that, it simply returns a tuple of features and the corresponding regression target.
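As a quick sanity check (a minimal sketch, separate from the class above), you can verify the dtypes before and after the cast:

import torch
from sklearn.datasets import make_regression

x_np, y_np = make_regression(n_samples=100, n_features=1, noise=0.1, random_state=42)
print(x_np.dtype)  # float64 -- numpy's default

x_t = torch.from_numpy(x_np).float()
print(x_t.dtype)   # torch.float32 -- PyTorch's default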

Fit

Almost there, but you have to flatten the output of the model (described below). torch.nn.Linear returns tensors of shape (batch, 1), while your targets have shape (batch,). flatten() removes the unnecessary 1 dimension.

# 2. Calculate Loss
loss = criterion(pred.flatten(), yb)
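To see why this matters, here is a minimal sketch of the shapes involved (the batch size of 32 is just illustrative). Without the flatten, broadcasting would expand the element-wise difference to shape (32, 32) and the loss would be silently wrong (recent PyTorch versions emit a warning about this):

import torch

pred = torch.randn(32, 1)    # model output: (batch, 1)
target = torch.randn(32)     # regression targets: (batch,)

# After flattening the shapes match, so MSE is computed element-wise as intended
print(pred.flatten().shape)  # torch.Size([32])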

Model

That is actually all you need:

model = torch.nn.Linear(1, 1)

Any layer can be called directly; there is no need for a forward method or inheritance for simple models.
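For example (a minimal sketch), the layer instance is itself callable and trainable like any module:

import torch

model = torch.nn.Linear(1, 1)
x = torch.randn(4, 1)            # batch of 4 samples with 1 feature each
print(model(x).shape)            # torch.Size([4, 1])
print(list(model.parameters()))  # the weight and bias that SGD will update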

Calling

The rest is almost okay; you just have to create a torch.utils.data.DataLoader and pass it an instance of our dataset. What a DataLoader does is call the dataset's __getitem__ repeatedly and collect the results into batches of the specified size (there is some other funny business going on, but that's the idea):

dataset = RegressionDataset()
dataloader = torch.utils.data.DataLoader(dataset, batch_size=32)
model = torch.nn.Linear(1, 1)
criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=3e-4)

fit(5000, model, criterion, optimizer, dataloader)
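If you want a quick look at what the DataLoader yields (continuing from the snippet above, with dataset and dataloader as defined there), grab one batch and inspect its shapes:

xb, yb = next(iter(dataloader))
print(xb.shape, yb.shape)  # torch.Size([32, 1]) torch.Size([32])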

Also notice I've used torch.nn.MSELoss(); since we are passing it around as an object, it reads better than the bare function in this case.
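Both spellings compute the same value, as torch.nn.MSELoss is a module wrapper around torch.nn.functional.mse_loss. A quick sketch:

import torch
import torch.nn.functional as F

pred = torch.randn(8)
target = torch.randn(8)

criterion = torch.nn.MSELoss()
# The module form and the functional form give identical results
print(torch.allclose(criterion(pred, target), F.mse_loss(pred, target)))  # True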

Whole code

Putting it all together:

import torch
from sklearn.datasets import make_regression


class RegressionDataset(torch.utils.data.Dataset):
    def __init__(self):
        data = make_regression(n_samples=100, n_features=1, noise=0.1, random_state=42)
        self.x = torch.from_numpy(data[0]).float()
        self.y = torch.from_numpy(data[1]).float()

    def __len__(self):
        return len(self.x)

    def __getitem__(self, index):
        return self.x[index], self.y[index]


# Training function
def fit(num_epochs, model, criterion, optimizer, train_dl):
    # Repeat for given number of epochs
    for epoch in range(num_epochs):

        # Train with batches of data
        for xb, yb in train_dl:

            # 1. Generate predictions
            pred = model(xb)

            # 2. Calculate Loss
            loss = criterion(pred.flatten(), yb)

            # 3. Compute gradients
            loss.backward()

            # 4. Update parameters using gradients
            optimizer.step()

            # 5. Reset the gradients to zero
            optimizer.zero_grad()

        # Print the progress
        if (epoch + 1) % 10 == 0:
            print(
                "Epoch [{}/{}], Loss: {:.4f}".format(epoch + 1, num_epochs, loss.item())
            )


dataset = RegressionDataset()
dataloader = torch.utils.data.DataLoader(dataset, batch_size=32)
model = torch.nn.Linear(1, 1)
criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=3e-4)

fit(5000, model, criterion, optimizer, dataloader)

You should get a loss of around 0.053; vary the noise or other parameters for a harder or easier regression task.
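Once training has finished, you can read the fitted slope and intercept straight off the layer (continuing from the code above); this is roughly the analogue of model.get_weights() in Keras:

# Learned parameters of y = w * x + b
print(model.weight.item(), model.bias.item())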

answered Oct 16 '22 by Szymon Maszke