
Convert image to proper dimension PyTorch

I have an input image as a NumPy array of shape [H, W, C], where H is the height, W the width, and C the channels.
I want to convert it to [B, C, H, W], where B is the batch size (which should always be 1 here), moving the channel axis to the front.

_image = np.array(_image)
h, w, c = _image.shape
image = torch.from_numpy(_image).unsqueeze_(0).view(1, c, h, w)

So, will this preserve the image properly, i.e. without displacing the original pixel values?
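For what it's worth, this can be checked with NumPy alone (a minimal sketch, assuming a tiny synthetic image): `torch.Tensor.view` reinterprets the underlying memory exactly like `np.reshape`, so reshaping [H, W, C] data to (1, C, H, W) displaces the pixel values, whereas moving the axes with `transpose` (NumPy's analogue of torch's `permute`) preserves them.

```python
import numpy as np

# Tiny fake image: 2x2 pixels, 3 channels, distinct values everywhere.
h, w, c = 2, 2, 3
img = np.arange(h * w * c).reshape(h, w, c)

# reshape (like torch's view) keeps the flat memory order, so pixel
# (y, x, ch) does NOT land at [0, ch, y, x] -- the values are displaced.
reshaped = img.reshape(1, c, h, w)
print(np.array_equal(reshaped[0, :, 0, 0], img[0, 0, :]))  # False

# transpose moves the axes themselves, so each pixel keeps its values.
transposed = img.transpose(2, 0, 1)[np.newaxis]
print(np.array_equal(transposed[0, :, 0, 0], img[0, 0, :]))  # True
```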

Arpit Kathuria asked Oct 18 '25 16:10


1 Answer

I'd prefer the following, which leaves the original image unmodified and simply adds a new axis as desired:

_image = np.array(_image)
image = torch.from_numpy(_image)
image = image[np.newaxis, :]
# image.unsqueeze(0) works fine here too

Then to swap the axes as desired:

image = image.permute(0, 3, 1, 2)
# permute reorders the axes, mapping [B, H, W, C] -> [B, C, H, W]:
# new axis0 = old axis0 (batch)
# new axis1 = old axis3 (channels)
# new axis2 = old axis1 (height)
# new axis3 = old axis2 (width)
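As a quick sanity check (a minimal sketch using a random HWC array), every pixel value survives the permute: `image[0, ch, y, x]` matches the original `_image[y, x, ch]`, which is exactly the guarantee the question asks for.

```python
import numpy as np
import torch

# Random HWC image with distinct float values.
_image = np.random.rand(4, 5, 3).astype(np.float32)

image = torch.from_numpy(_image)[np.newaxis, :]  # shape (1, H, W, C)
image = image.permute(0, 3, 1, 2)                # shape (1, C, H, W)

assert image.shape == (1, 3, 4, 5)
# Permuting back recovers the original array exactly: the values were
# only re-indexed, never displaced.
assert torch.equal(image[0].permute(1, 2, 0), torch.from_numpy(_image))
print("permute preserves all pixel values")
```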
twolffpiggott answered Oct 20 '25 05:10


