I've got a dataset of images that looks like this:
array([[[[0.35980392, 0.26078431, 0.14313725],
         [0.38137255, 0.26470588, 0.15196078],
         [0.51960784, 0.3745098 , 0.26176471],
         ...,
         [0.34313725, 0.22352941, 0.15      ],
         [0.30784314, 0.2254902 , 0.15686275],
         [0.28823529, 0.22843137, 0.16862745]],

        [[0.38627451, 0.28235294, 0.16764706],
         [0.45098039, 0.32843137, 0.21666667],
         [0.62254902, 0.47254902, 0.36470588],
         ...,
         [0.34607843, 0.22745098, 0.15490196],
         [0.30686275, 0.2245098 , 0.15588235],
         [0.27843137, 0.21960784, 0.16176471]],

        [[0.41568627, 0.30098039, 0.18431373],
         [0.51862745, 0.38529412, 0.27352941],
         [0.67745098, 0.52058824, 0.40980392],
         ...,
         [0.34901961, 0.22941176, 0.15588235],
         [0.29901961, 0.21666667, 0.14901961],
         [0.26078431, 0.20098039, 0.14313725]],

        ...,
What I need is to convert it to a tensor so that I can pass it to a CNN. I'm trying to do it like this:
from torchvision import transforms

transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])
How can I apply this transform to my dataset? Thanks for any help.
You probably want to create a dataloader. You will need a Dataset class that indexes into your data; you can do that like this:
import json

import torch
from torchvision import transforms


class YourDataset(torch.utils.data.Dataset):
    def __init__(self):
        # load your dataset (however you want; this example has the dataset stored in a json file)
        with open(<dataset-path>, "r") as f:
            self.dataset = json.load(f)

    def __getitem__(self, idx):
        sample = self.dataset[idx]
        data, label = sample[0], sample[1]
        transform = transforms.Compose([
            transforms.ToTensor(),
            transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
        ])
        return transform(data), torch.tensor(label)

    def __len__(self):
        return len(self.dataset)
Now you can create a dataloader:
train_set = YourDataset()
train_dataloader = torch.utils.data.DataLoader(
    train_set,
    batch_size=64,
    num_workers=1,
    shuffle=True,
)
And now you can iterate over the dataloader in your training loop:

for samples, labels in train_dataloader:
    # samples holds N samples of your dataset, where N is the batch size
    ...
If you need more explanation, take a look at PyTorch's documentation on this topic.
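For completeness, a bare-bones training loop over that dataloader might look roughly like this (just a sketch; model, criterion, optimizer and num_epochs are placeholders for whatever CNN, loss function, optimizer and schedule you use):

for epoch in range(num_epochs):
    for samples, labels in train_dataloader:
        optimizer.zero_grad()              # reset gradients from the previous step
        outputs = model(samples)           # samples has shape (N, 3, H, W)
        loss = criterion(outputs, labels)
        loss.backward()                    # backpropagate
        optimizer.step()                   # update the model's weights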
Once initialized, a torchvision transform can be called on a PIL image or a torch.Tensor, depending on the transform (the documentation states precisely what each transform expects as input). So, assuming your transform pipeline is valid (the output type of each transform is compatible with the input type of the following one), you can simply call it: transform(data).
This should work with your data, since transforms.ToTensor accepts a numpy.ndarray as input.
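For example, if you just want to turn the whole array into a single batch tensor without writing a Dataset, you can apply the transform image by image and stack the results. This is a minimal sketch, where images stands for the (N, H, W, 3) array you printed above:

import torch
from torchvision import transforms

transform = transforms.Compose([
    transforms.ToTensor(),  # H x W x C ndarray -> C x H x W tensor
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])

# images: your (N, H, W, 3) float array with values in [0, 1]
batch = torch.stack([transform(img).float() for img in images])  # shape (N, 3, H, W)

Note that ToTensor only rescales uint8 inputs to [0, 1]; since your array is already float-valued in that range, the values pass through unchanged, and .float() just casts from float64 to float32 for the CNN.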
Alternatively, if your dataset is a little more complex than a simple NumPy array, you could implement your own Dataset class which would handle image fetching from the file system, the relevant transformations, etc. Here's a rough guideline:
class MyDataset(Dataset):
    def __init__(self):
        super(MyDataset, self).__init__()
        # define your data, for example a list of image paths
        self.data = ...
        # define your transform pipeline
        self.transform = transforms.Compose([
            transforms.ToTensor(),
            transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
        ])

    def __getitem__(self, index):
        path = self.data[index]
        img = Image.open(path)
        return self.transform(img)

    def __len__(self):
        return len(self.data)
Where Image is imported from PIL.
For your case, though, you would have something a little simpler:
class MyDataset(Dataset):
    def __init__(self, data):
        super(MyDataset, self).__init__()
        self.data = data
        # define your transform pipeline
        self.transform = transforms.Compose([
            transforms.ToTensor(),
            transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
        ])

    def __getitem__(self, index):
        x = self.data[index]
        return self.transform(x)

    def __len__(self):
        return len(self.data)
And you would pass your numpy.ndarray to MyDataset's initializer; indexing it then yields one H x W x 3 image at a time, which is what ToTensor expects.
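For instance (again, images stands for your (N, H, W, 3) array):

dataset = MyDataset(images)
loader = torch.utils.data.DataLoader(dataset, batch_size=64, shuffle=True)

for batch in loader:
    # batch has shape (64, 3, H, W), normalized and ready for your CNN
    ...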