I want to specify the gpu to run my process. And I set it as follows:
import tensorflow as tf

with tf.device('/gpu:0'):
    a = tf.constant(3.0)
with tf.Session() as sess:
    while True:
        print(sess.run(a))
However it still allocate memory in both my two gpus.
|    0      7479    C   python                        5437MiB |
|    1      7479    C   python                        5437MiB |
If you have an Nvidia graphics card, open the Nvidia control panel. In the left pane, select Manage 3D settings. In the right pane, under Global Settings tab, click on the drop-down menu under Preferred Graphics Processor. Select the graphics card you wish to set as default, then click Apply to enforce the changes.
By default, TensorFlow will use all available GPU devices.
import tensorflow as tf

if tf.test.gpu_device_name():
    print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
There are 3 ways to achieve this:
Using the CUDA_VISIBLE_DEVICES environment variable. Setting CUDA_VISIBLE_DEVICES="1" makes only device 1 visible, and setting CUDA_VISIBLE_DEVICES="0,1" makes devices 0 and 1 visible. You can do this in Python with the line os.environ["CUDA_VISIBLE_DEVICES"]="0,1" after importing the os package.
Using with tf.device('/gpu:2') when creating the graph. Then GPU device 2 will be used to run it.
Using config = tf.ConfigProto(device_count = {'GPU': 1}) and then sess = tf.Session(config=config). Note that device_count limits how many GPU devices TensorFlow uses (here, one), it does not select which device that is; combine it with one of the options above to pick a specific GPU.
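One subtlety with the environment-variable approach: CUDA reads CUDA_VISIBLE_DEVICES when it is first initialized, so in practice the safe convention is to set the variable before importing tensorflow at all. A minimal sketch (the device IDs are illustrative):

```python
import os

# Set the mask *before* TensorFlow is imported; once CUDA has been
# initialized, changing this variable has no effect on device visibility.
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"

# import tensorflow as tf  # imported only after the variable is set

print(os.environ["CUDA_VISIBLE_DEVICES"])
```

Inside the process, the visible devices are then renumbered from zero, so '/gpu:0' refers to the first device in the mask.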
TF would allocate all available memory on each visible GPU if not told otherwise. Here are 5 ways to stick to just one (or a few) GPUs.
Bash solution. Set CUDA_VISIBLE_DEVICES=0,1
in your terminal/console before starting python or jupyter notebook:
CUDA_VISIBLE_DEVICES=0,1 python script.py
Python solution. Run the next two lines of code before constructing a session:

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"
Automated solution. The method below automatically detects GPU devices that are not used by other scripts and sets CUDA_VISIBLE_DEVICES for you. You have to call mask_unused_gpus before constructing a session. It filters out GPUs by current memory usage, so you can run multiple instances of your script at once without changing your code or setting console parameters.
The function:
import subprocess as sp
import os

def mask_unused_gpus(leave_unmasked=1):
    ACCEPTABLE_AVAILABLE_MEMORY = 1024  # MiB
    COMMAND = "nvidia-smi --query-gpu=memory.free --format=csv"
    try:
        _output_to_list = lambda x: x.decode('ascii').split('\n')[:-1]
        memory_free_info = _output_to_list(sp.check_output(COMMAND.split()))[1:]
        memory_free_values = [int(x.split()[0]) for i, x in enumerate(memory_free_info)]
        available_gpus = [i for i, x in enumerate(memory_free_values) if x > ACCEPTABLE_AVAILABLE_MEMORY]
        if len(available_gpus) < leave_unmasked:
            raise ValueError('Found only %d usable GPUs in the system' % len(available_gpus))
        os.environ["CUDA_VISIBLE_DEVICES"] = ','.join(map(str, available_gpus[:leave_unmasked]))
    except Exception as e:
        print('"nvidia-smi" is probably not installed. GPUs are not masked', e)

mask_unused_gpus(2)
Limitations: if you start multiple scripts at once it might cause a collision, because memory is not allocated immediately when you construct a session. In case it is a problem for you, you can use a randomized version as in original source code: mask_busy_gpus()
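The parsing step inside mask_unused_gpus can be illustrated on a canned copy of the nvidia-smi CSV output; the sample free-memory numbers below are made up for illustration:

```python
# Hypothetical sample of `nvidia-smi --query-gpu=memory.free --format=csv`
# output: a header row followed by one "<number> MiB" row per GPU.
sample = "memory.free [MiB]\n11170 MiB\n320 MiB\n10800 MiB\n"

lines = sample.split('\n')[:-1]      # drop the trailing empty string
memory_free_info = lines[1:]         # drop the CSV header row
memory_free_values = [int(x.split()[0]) for x in memory_free_info]

ACCEPTABLE_AVAILABLE_MEMORY = 1024   # MiB
available_gpus = [i for i, x in enumerate(memory_free_values)
                  if x > ACCEPTABLE_AVAILABLE_MEMORY]
print(available_gpus)  # GPUs 0 and 2 have enough free memory
```

With these sample values, GPU 1 (only 320 MiB free) is treated as busy and excluded from the mask.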
TensorFlow 2.0 suggests yet another method:
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    # Restrict TensorFlow to only use the first GPU
    try:
        tf.config.experimental.set_visible_devices(gpus[0], 'GPU')
    except RuntimeError as e:
        # Visible devices must be set at program startup
        print(e)
TensorFlow/Keras also allows specifying the GPU to be used via the session config. I can recommend it only if setting the environment variable is not an option (i.e. an MPI run), because it tends to be the least reliable of all methods, especially with Keras.
config = tf.ConfigProto()
config.gpu_options.visible_device_list = "0,1"
with tf.Session(config=config) as sess:
    ...
# or K.set_session(tf.Session(config=config))