
Tensorflow multiple sessions with multiple GPUs

Tags:

tensorflow

gpu

I have a workstation with 2 GPUs and I am trying to run multiple tensorflow jobs at the same time, so I can train more than one model at once, etc.

For example, I've tried to assign the sessions to different devices via the Python API. In script1.py:

with tf.device("/gpu:0"):
    # do stuff

in script2.py:

with tf.device("/gpu:1"):
    # do stuff

in script3.py:

with tf.device("/cpu:0"):
    # do stuff
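Put together, a minimal runnable version of one such script might look like the sketch below. This is a toy matmul graph, not the asker's model, and it is written against the TF 1.x-style tf.compat.v1 API (an assumption, so it also runs on current TensorFlow; on the 2016-era versions in the question you would use plain import tensorflow as tf):

```python
try:
    import tensorflow.compat.v1 as tf  # TF 1.x-style API (also works on TF 2.x)
    tf.disable_eager_execution()
except (ImportError, AttributeError):
    tf = None  # tensorflow not installed (or too old for compat.v1); skip the demo

result = None
if tf is not None:
    with tf.device("/gpu:0"):          # "/gpu:1" in script2.py, "/cpu:0" in script3.py
        a = tf.constant([[1.0, 2.0]])
        b = tf.constant([[3.0], [4.0]])
        c = tf.matmul(a, b)            # 1*3 + 2*4 = 11

    # log_device_placement prints the device each op actually ran on;
    # allow_soft_placement falls back to CPU if the requested GPU is unavailable
    config = tf.ConfigProto(log_device_placement=True, allow_soft_placement=True)
    with tf.Session(config=config) as sess:
        result = float(sess.run(c)[0][0])  # 11.0
```

Running this with log_device_placement=True is a quick way to confirm which device each op was actually placed on.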

If I run each script by itself, I can see that it is using the specified device. (Also, each model fits comfortably into a single GPU and doesn't use the other one, even if both are available.)

However, if one script is running and I try to run another, I always get this error:

I tensorflow/core/common_runtime/local_device.cc:40] Local device intra op parallelism threads: 8
I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:909] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
I tensorflow/core/common_runtime/gpu/gpu_init.cc:103] Found device 0 with properties:
name: GeForce GTX 980
major: 5 minor: 2 memoryClockRate (GHz) 1.2155
pciBusID 0000:01:00.0
Total memory: 4.00GiB
Free memory: 187.65MiB
I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:909] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
I tensorflow/core/common_runtime/gpu/gpu_init.cc:103] Found device 1 with properties:
name: GeForce GTX 980
major: 5 minor: 2 memoryClockRate (GHz) 1.2155
pciBusID 0000:04:00.0
Total memory: 4.00GiB
Free memory: 221.64MiB
I tensorflow/core/common_runtime/gpu/gpu_init.cc:127] DMA: 0 1
I tensorflow/core/common_runtime/gpu/gpu_init.cc:137] 0:   Y Y
I tensorflow/core/common_runtime/gpu/gpu_init.cc:137] 1:   Y Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:702] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 980, pci bus id: 0000:01:00.0)
I tensorflow/core/common_runtime/gpu/gpu_device.cc:702] Creating TensorFlow device (/gpu:1) -> (device: 1, name: GeForce GTX 980, pci bus id: 0000:04:00.0)
I tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:42] Allocating 187.40MiB bytes.
E tensorflow/stream_executor/cuda/cuda_driver.cc:932] failed to allocate 187.40M (196505600 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
F tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:47] Check failed: gpu_mem != nullptr  Could not allocate GPU device memory for device 0. Tried to allocate 187.40MiB
Aborted (core dumped)

It seems each tensorflow process is trying to grab all of the GPUs on the machine when it loads even if not all devices are going to be used to run the model.

I see there is an option to limit the amount of GPU memory each process uses:

tf.GPUOptions(per_process_gpu_memory_fraction=0.5) 

...I haven't tried it, but this seems like it would make two processes each take 50% of every GPU, rather than pinning each process to a separate GPU...
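For reference, wiring that option into a session looks roughly like the sketch below (written against the TF 1.x-style tf.compat.v1 API, an assumption; on the question's era of TensorFlow you would use plain import tensorflow as tf). As suspected, this caps the memory taken per visible GPU; it does not by itself stop the process from claiming both GPUs:

```python
try:
    import tensorflow.compat.v1 as tf  # TF 1.x-style API (also works on TF 2.x)
    tf.disable_eager_execution()
except (ImportError, AttributeError):
    tf = None  # tensorflow not installed (or too old for compat.v1)

if tf is not None:
    # Ask the allocator for at most half of each visible GPU's memory,
    # instead of the default of grabbing (nearly) all of it up front.
    gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.5)
    config = tf.ConfigProto(gpu_options=gpu_options)
    sess = tf.Session(config=config)  # pass the config when creating the session
    sess.close()
```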

Does anyone know how to configure tensorflow to use only one GPU and leave the other available for another tensorflow process?

Asked by j314erre on Jan 13 '16.


People also ask

Can TensorFlow run on multiple GPUs?

TensorFlow provides strong support for distributing deep learning across multiple GPUs. TensorFlow is an open source platform that you can use to develop and train machine learning and deep learning models. TensorFlow operations can leverage both CPUs and GPUs.

How do I use second GPU in TensorFlow?

TensorFlow takes the first one, /gpu:0, by default. When I launch a second training script to run on the second GPU (after making the necessary changes, i.e. with tf.device(...)) while keeping the first process running on the first GPU, TensorFlow kills the first process and uses only the second GPU to run the second process.

Can TensorFlow use both CPU and GPU?

TensorFlow supports the use of multiple CPUs and GPUs [3], allowing computations to run on multiple threads to reduce the overall computation time.

Does CUDA work with multiple GPUs?

To run multiple instances of a single-GPU application on different GPUs, you can use the CUDA environment variable CUDA_VISIBLE_DEVICES. The variable restricts execution to a specific set of devices. To use it, just set CUDA_VISIBLE_DEVICES to a comma-separated list of GPU IDs.


1 Answer

TensorFlow will attempt to use (an equal fraction of the memory of) all GPU devices that are visible to it. If you want to run different sessions on different GPUs, you should do the following.

  1. Run each session in a different Python process.
  2. Start each process with a different value for the CUDA_VISIBLE_DEVICES environment variable. For example, if your script is called my_script.py and you have 4 GPUs, you could run the following:

    $ CUDA_VISIBLE_DEVICES=0 python my_script.py    # Uses GPU 0.
    $ CUDA_VISIBLE_DEVICES=1 python my_script.py    # Uses GPU 1.
    $ CUDA_VISIBLE_DEVICES=2,3 python my_script.py  # Uses GPUs 2 and 3.

    Note that the GPU devices in TensorFlow will still be numbered from zero (i.e. "/gpu:0", etc.), but they will correspond to the devices that you have made visible with CUDA_VISIBLE_DEVICES.
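The same restriction can also be applied from inside the script itself, as long as the variable is set before TensorFlow is first imported, because the CUDA runtime reads it when it is initialized. A minimal sketch, with the device id "0" as an illustrative choice:

```python
import os

# Must be set BEFORE the first `import tensorflow`, since the CUDA runtime
# reads this variable when TensorFlow initializes it.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # this process will only see GPU 0

# import tensorflow as tf  # imported afterwards, GPU 0 shows up as "/gpu:0"
```

Doing it in the script keeps the pinning with the code, at the cost of hard-coding the device; the shell-variable form above is more flexible when launching several copies of the same script.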

Answered by mrry on Oct 01 '22.