Parallel way of applying a function element-wise to a PyTorch CUDA tensor

Suppose I have a torch CUDA tensor and I want to apply some function like sin(), but I have explicitly defined the function F myself. How can I apply F to the tensor in parallel in PyTorch?
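A minimal sketch of the setup being described, assuming a hypothetical custom function F (the concrete body of F is made up for illustration and not part of the original question):

```python
import torch

# Hypothetical custom function the asker wants applied element-wise.
# When F is built only from torch ops, applying it to a tensor already
# works; the question is how to do this in parallel for an arbitrary,
# explicitly defined F.
def F(x):
    return x * x + 1.0

device = "cuda" if torch.cuda.is_available() else "cpu"
t = torch.arange(4.0, device=device)
out = F(t)  # element-wise application over the whole tensor
print(out)
```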

asked Jun 08 '17 by Abhinav Singh
People also ask

What is torch.cuda.synchronize()?

torch.cuda.synchronize(device=None) waits for all kernels in all streams on a CUDA device to complete.
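This matters because CUDA kernels launch asynchronously: timing GPU work without a synchronize measures only the launch overhead. A small sketch (falls back to CPU when CUDA is unavailable):

```python
import time
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(1000, 1000, device=device)

start = time.perf_counter()
y = x @ x  # on CUDA this kernel is launched asynchronously
if device == "cuda":
    torch.cuda.synchronize()  # block until the matmul actually finishes
elapsed = time.perf_counter() - start
print(f"matmul took {elapsed:.6f}s on {device}")
```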

What does torch.cuda.is_available() do?

torch.cuda.is_available() returns a bool indicating whether CUDA is currently available.
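A common idiom is to pick the device once based on this check and move tensors there explicitly:

```python
import torch

# Select the GPU when available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
t = torch.ones(3, device=device)
print(device, t)
```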


1 Answer

I think it is currently not possible to explicitly parallelize an arbitrary function over a CUDA tensor. A possible workaround is to define your function the same way the built-in non-linear activation functions are defined, so the tensor can be fed forward through the network and your function.

The drawback is that this probably won't work out of the box, because you would have to define a CUDA function and recompile PyTorch.
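One caveat worth adding, as a hedged sketch: if F can be expressed as a composition of existing torch operations, no recompilation is needed, because each built-in op already runs as a parallel CUDA kernel, so the composite F is applied element-wise in parallel on the GPU. The body of F below is a made-up example:

```python
import torch

# Hypothetical F composed entirely of built-in torch ops; each op
# dispatches its own parallel CUDA kernel when the input is on the GPU.
def F(x):
    return torch.sin(x) ** 2 + 0.5 * x

device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.linspace(0.0, 1.0, 5, device=device)
y = F(x)  # every element is processed in parallel on the device
print(y)
```

Recompiling PyTorch with a custom kernel is only required when F cannot be expressed through existing ops.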

answered Jan 07 '23 by loose11