
PyTorch equivalent of `numpy.unpackbits`?

Tags:

pytorch

I am training a neural net on GPU. It uses a lot of binary input features.

Since moving data to/from the GPU is expensive, I am looking for ways to make the initial representation more compact. Currently, I encode my features as int8, move them to the GPU, and then expand them to float32:

import torch

# create int8
features = torch.zeros(*dims, dtype=torch.int8)

# fill in some data (set some features to 1.)
# …

# move int8 to GPU
features = features.to(device="cuda", non_blocking=True)

# expand int8 as float32
features = features.to(dtype=torch.float32)

Now, I am looking for ways to compress those binary features to bits instead of bytes.

NumPy has the functions packbits and unpackbits:

>>> a = np.array([[2], [7], [23]], dtype=np.uint8)
>>> b = np.unpackbits(a, axis=1)
>>> b
array([[0, 0, 0, 0, 0, 0, 1, 0],
       [0, 0, 0, 0, 0, 1, 1, 1],
       [0, 0, 0, 1, 0, 1, 1, 1]], dtype=uint8)

Is there any way to unpack bits in PyTorch on the GPU?

asked Sep 10 '19 by Konstantin Druzhkin




2 Answers

There is no similar function at the time of writing this answer. However, a workaround is to unpack with NumPy and then convert with torch.from_numpy:

In[2]: import numpy as np
In[3]: a = np.array([[2], [7], [23]], dtype=np.uint8)
In[4]: b = np.unpackbits(a, axis=1)
In[5]: b
Out[5]: 
array([[0, 0, 0, 0, 0, 0, 1, 0],
       [0, 0, 0, 0, 0, 1, 1, 1],
       [0, 0, 0, 1, 0, 1, 1, 1]], dtype=uint8)
In[6]: import torch
In[7]: torch.from_numpy(b)
Out[7]: 
tensor([[0, 0, 0, 0, 0, 0, 1, 0],
        [0, 0, 0, 0, 0, 1, 1, 1],
        [0, 0, 0, 1, 0, 1, 1, 1]], dtype=torch.uint8)
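
Note that this workaround does the actual unpacking on the CPU before any transfer. If the expansion needs to happen on the GPU itself, a handful of bitwise ops reproduce np.unpackbits in pure PyTorch. A minimal sketch (the helper name unpack_bits is my own):

import torch

def unpack_bits(x: torch.Tensor) -> torch.Tensor:
    """Unpack each uint8 in x into 8 bits (most significant bit first)
    along a new trailing axis, mirroring np.unpackbits."""
    masks = torch.tensor([128, 64, 32, 16, 8, 4, 2, 1],
                         dtype=torch.uint8, device=x.device)
    # (..., 1) & (8,) broadcasts to (..., 8); a nonzero result means the bit is set.
    return (x.unsqueeze(-1) & masks).ne(0).to(torch.uint8)

a = torch.tensor([[2], [7], [23]], dtype=torch.uint8, device="cuda")
b = unpack_bits(a).reshape(a.shape[0], -1)  # same values as np.unpackbits(a, axis=1)
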
answered Sep 27 '22 by ndrwnaguib


You can use DLPack to convert your PyTorch tensor to a CuPy array, then use cupy.unpackbits:

import cupy
import torch

from torch.utils.dlpack import to_dlpack
from torch.utils.dlpack import from_dlpack

# Create a PyTorch tensor.
tx = torch.tensor([1, 2, 3, 4], dtype=torch.uint8, device="cuda")

# Convert it into a DLPack tensor.
dx = to_dlpack(tx)

# Convert it into a CuPy array.
cx = cupy.fromDlpack(dx)

# Unpack bits (does not support axis, so flatten/reshape as needed)
cx_bits = cupy.unpackbits(cx).reshape(-1, 8)

# Convert it back to a PyTorch tensor.
tx_bits = from_dlpack(cx_bits.toDlpack())

UPDATE: I'm not actually sure DLPack is necessary; cupy.asarray can consume a CUDA tensor directly through the CUDA array interface:

>>> t = torch.tensor([[2], [22], [222]], dtype=torch.uint8, device="cuda")
>>> t_bits = torch.as_tensor(cupy.unpackbits(cupy.asarray(t)).reshape(-1, 8), device="cuda")
>>>
>>> t_bits
tensor([[0, 0, 0, 0, 0, 0, 1, 0],
        [0, 0, 0, 1, 0, 1, 1, 0],
        [1, 1, 0, 1, 1, 1, 1, 0]], device='cuda:0', dtype=torch.uint8)
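
Putting the pieces together for the use case in the question: pack on the CPU with np.packbits, transfer only the packed bytes (one eighth of the original), and unpack plus widen to float32 on the GPU. A hedged end-to-end sketch (shapes and variable names are illustrative; it flattens and reshapes around cupy.unpackbits as above):

import numpy as np
import cupy
import torch

# Toy binary features: 1024 samples x 256 bits each.
bits = np.random.randint(0, 2, size=(1024, 256), dtype=np.uint8)

# Pack on the CPU: 8 features per byte -> shape (1024, 32), dtype uint8.
packed = torch.from_numpy(np.packbits(bits, axis=1))

# Transfer only the packed bytes, then unpack and widen on the GPU.
packed_gpu = packed.to(device="cuda", non_blocking=True)
unpacked = cupy.unpackbits(cupy.asarray(packed_gpu)).reshape(len(bits), -1)
features = torch.as_tensor(unpacked, device="cuda").to(torch.float32)
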
answered Sep 27 '22 by MichaelSB