 

Accessing PyTorch GPU matrix from TensorFlow directly

I have a neural network written in PyTorch that outputs some tensor a on the GPU. I would like to continue processing a with a highly efficient TensorFlow layer.

As far as I know, the only way to do this is to move a from GPU memory to CPU memory, convert to numpy, and then feed that into TensorFlow. A simplified example:

import torch
import tensorflow as tf

# output of some neural network written in PyTorch
a = torch.ones((10, 10), dtype=torch.float32).cuda()

# move to CPU / pinned memory
c = a.to('cpu', non_blocking=True)

# setup TensorFlow stuff (only needs to happen once)
sess = tf.Session()
c_ph = tf.placeholder(tf.float32, shape=c.shape)
c_mean = tf.reduce_mean(c_ph)

# run TensorFlow
print(sess.run(c_mean, feed_dict={c_ph: c.numpy()}))

This may be a bit far-fetched, but is there a way to make it so that either

  1. a never leaves GPU memory, or
  2. a goes from GPU memory to pinned memory to GPU memory?

I attempted 2. in the code snippet above using non_blocking=True, but I am not sure whether it does what I expect (i.e. moves it to pinned memory).

Ideally, my TensorFlow graph would operate directly on the memory occupied by the PyTorch tensor, but I suppose that is not possible?
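(Editor's note on option 2.: a non_blocking copy only overlaps with computation when the destination is already pinned; a.to('cpu', non_blocking=True) allocates an ordinary pageable tensor. A sketch of explicitly staging through a pinned buffer, assuming a CUDA device is available:)

```python
import torch

if torch.cuda.is_available():
    a = torch.ones((10, 10), dtype=torch.float32, device='cuda')

    # Pre-allocate a pinned (page-locked) host buffer once ...
    c = torch.empty(a.shape, dtype=a.dtype, pin_memory=True)

    # ... so the device-to-host copy can actually run asynchronously.
    c.copy_(a, non_blocking=True)
    torch.cuda.synchronize()  # ensure the copy finished before reading c
    print(c.is_pinned())      # True
```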

fabian789 asked Jan 30 '19


1 Answer

I am not familiar with TensorFlow, but you can use PyTorch to expose the "internals" of a tensor.
You can access the underlying storage of a tensor:

a.storage()

Once you have the storage, you can get a pointer to the memory (either CPU or GPU):

a.storage().data_ptr()
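For example, a small CPU-side illustration (the same calls work for CUDA tensors, where the pointer is a device address): the pointer is the address of the first element, so views into the same memory sit at fixed offsets from it.

```python
import torch

a = torch.ones((10, 10), dtype=torch.float32)

# Address of the first element of the underlying buffer.
base = a.data_ptr()

# A view shares the buffer: row 3 starts 3 * 10 elements further in.
row = a[3]
assert row.data_ptr() == base + 3 * 10 * a.element_size()
```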

You can check whether it is pinned:

a.storage().is_pinned()

And you can pin it (note that pin_memory() returns a pinned copy; it does not pin the existing storage in place):

a.storage().pin_memory()
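A quick sketch of that behavior (pinning itself requires a CUDA-capable build, so that part is guarded):

```python
import torch

# A freshly created CPU tensor lives in ordinary pageable memory.
x = torch.ones(4)
print(x.is_pinned())  # False

if torch.cuda.is_available():
    # pin_memory() hands back a page-locked copy; x itself is unchanged.
    pinned = x.pin_memory()
    print(x.is_pinned(), pinned.is_pinned())  # False True
```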

I am not familiar with the interfaces between PyTorch and TensorFlow, but I came across an example of a package (FAISS) directly accessing PyTorch tensors on the GPU.
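(Editor's note: newer releases of both libraries expose a zero-copy bridge via the DLPack protocol, torch.utils.dlpack on the PyTorch side and tf.experimental.dlpack in TensorFlow 2.2+. A sketch, assuming both are installed; for CUDA tensors the memory is shared on-device without a host round trip:)

```python
import torch
from torch.utils import dlpack as torch_dlpack
import tensorflow as tf

t = torch.arange(6, dtype=torch.float32)

# Export the tensor as a DLPack capsule (no copy) ...
capsule = torch_dlpack.to_dlpack(t)

# ... and import it into TensorFlow, again without copying.
tf_t = tf.experimental.dlpack.from_dlpack(capsule)
print(tf.reduce_mean(tf_t).numpy())  # 2.5
```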

Shai answered Oct 29 '22