Fast GPU computation on PyTorch sparse tensor

Is it possible to do operations on each row of a PyTorch MxN tensor, but only at certain indices (for instance nonzero) to save time?

I'm particularly interested in the case where M and N are very large and only a few elements in each row are nonzero.

(Toy example) From this large tensor:

Large = torch.tensor([[0, 1, 3, 0, 0, 0],
                      [0, 0, 0, 0, 5, 0],
                      [1, 0, 0, 5, 0, 1]])

I'd like to use something like the following smaller "tensor":

irregular_tensor = [ [1, 3],
                     [5],
                     [1, 5, 1]]

and do the same exact computation on each row (for instance involving torch.cumsum and torch.exp) to obtain an output of size Mx1.

Is there a way to do that?
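For reference, a dense version of this kind of row-wise pipeline (cumsum, then exp, then a reduction to Mx1) might look like the sketch below; the variable names and the final sum are assumptions for illustration, and most of the work here is wasted on zero entries when the matrix is sparse:

```python
import torch

large = torch.tensor([[0., 1., 3., 0., 0., 0.],
                      [0., 0., 0., 0., 5., 0.],
                      [1., 0., 0., 5., 0., 1.]])

# Dense baseline: cumsum and exp over every element of every row,
# then reduce each row to a single value -> output of shape (M, 1).
out = torch.exp(torch.cumsum(large, dim=1)).sum(dim=1, keepdim=True)
print(out.shape)  # torch.Size([3, 1])
```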

asked Mar 18 '26 19:03 by nissmar
1 Answer

You might be interested in PyTorch's sparse tensor functionality. You can convert a dense PyTorch Tensor to a sparse tensor using the to_sparse() method of the Tensor class.
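A minimal sketch of the conversion on the toy matrix from the question (variable names are my own):

```python
import torch

large = torch.tensor([[0., 1., 3., 0., 0., 0.],
                      [0., 0., 0., 0., 5., 0.],
                      [1., 0., 0., 5., 0., 1.]])

# to_sparse() returns a sparse tensor in COO (coordinate) format:
# only the nonzero positions and their values are stored.
sparse = large.to_sparse()
print(sparse.layout)  # torch.sparse_coo
```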

You can then access a tensor containing all the indices in coordinate (COO) format via the sparse tensor's indices() method, and the associated values via its values() method.
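On the toy example, the indices/values pair looks like this (calling coalesce() first is the safe way to guarantee indices() and values() are usable):

```python
import torch

large = torch.tensor([[0., 1., 3., 0., 0., 0.],
                      [0., 0., 0., 0., 5., 0.],
                      [1., 0., 0., 5., 0., 1.]])
sparse = large.to_sparse().coalesce()

# Row 0 of indices() holds the row coordinates, row 1 the column coordinates.
print(sparse.indices())
# tensor([[0, 0, 1, 2, 2, 2],
#         [1, 2, 4, 0, 3, 5]])
print(sparse.values())
# tensor([1., 3., 5., 1., 5., 1.])
```

The values() tensor is exactly the flattened form of the "irregular tensor" in the question, with indices()[0] telling you which row each value belongs to.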

This also has the benefit of saving memory when storing the tensor.

There is some support for applying other Torch functions directly to sparse tensors, but it is limited.
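When a function you need isn't supported on sparse tensors directly, you can often work on values() and scatter the per-element results back to rows yourself. The sketch below (my own illustration, not the only way) computes a row-wise sum of exp over only the nonzero entries; note it uses a plain sum rather than the question's cumsum, since a per-row cumsum over the flat values() tensor would need extra segment bookkeeping:

```python
import torch

large = torch.tensor([[0., 1., 3., 0., 0., 0.],
                      [0., 0., 0., 0., 5., 0.],
                      [1., 0., 0., 5., 0., 1.]])
sparse = large.to_sparse().coalesce()
rows = sparse.indices()[0]   # row index of each nonzero entry
vals = sparse.values()       # the nonzero entries themselves

# Apply the elementwise part (exp) to the nonzeros only, then
# accumulate each result into its row with index_add_ -> (M, 1) output.
out = torch.zeros(large.shape[0], 1)
out.index_add_(0, rows, torch.exp(vals).unsqueeze(1))
```

This touches only the nonzero elements, so the cost scales with the number of nonzeros rather than with M*N.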

Also: be aware that this part of the API is still in Beta and subject to changes.

answered Mar 21 '26 09:03 by cosmict