Based on the documentation, the default GPU is the one with the lowest ID:

"If you have more than one GPU in your system, the GPU with the lowest ID will be selected by default."
Is it possible to change this default from the command line or with one line of code?
To limit TensorFlow to a specific set of GPUs, use the tf.config.set_visible_devices method. In some cases it is also desirable for the process to allocate only a subset of the available memory, or to grow its memory usage only as the process needs it.
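A minimal sketch of both, assuming a machine with at least two GPUs: it makes the second physical GPU the only one TensorFlow can see and lets its memory allocation grow on demand.

    import tensorflow as tf

    # Expose only the second physical GPU to this process.
    # This must run before any operation initializes the GPUs.
    gpus = tf.config.list_physical_devices('GPU')
    if len(gpus) > 1:
        tf.config.set_visible_devices(gpus[1], 'GPU')
        # Allocate memory on demand instead of reserving it all up front.
        tf.config.experimental.set_memory_growth(gpus[1], True)

    # The chosen GPU is now the process's only logical GPU, "/GPU:0".
    print(tf.config.list_logical_devices('GPU'))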
TensorFlow then runs operations on your GPUs by default. You can control how TensorFlow uses CPUs and GPUs in two ways: by logging which CPU or GPU each operation is placed on, and by instructing TensorFlow to run certain operations in a specific "device context", i.e. on the CPU or on a particular GPU if the machine has several.
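A short sketch combining the two, assuming at least two GPUs: tf.debugging.set_log_device_placement enables placement logging, and a tf.device context pins operations to a chosen device.

    import tensorflow as tf

    # Log the device each operation is placed on.
    tf.debugging.set_log_device_placement(True)

    # Pin these operations to the second GPU.
    with tf.device('/GPU:1'):
        a = tf.random.uniform((1000, 1000))
        b = tf.random.uniform((1000, 1000))
        c = tf.matmul(a, b)  # the placement log shows GPU:1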
Suever's answer correctly shows how to pin your operations to a particular GPU. However, if you are running multiple TensorFlow programs on the same machine, it is recommended that you set the CUDA_VISIBLE_DEVICES environment variable to expose different GPUs before starting the processes. Otherwise, TensorFlow will attempt to allocate almost the entire memory on all of the available GPUs, which prevents other processes from using those GPUs (even if the current process isn't using them).
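For example, each process can be launched with a different mask from the shell (e.g. CUDA_VISIBLE_DEVICES=1 python train.py, where train.py stands in for your script), or the variable can be set in code. A minimal sketch of the in-code variant; note that it must run before TensorFlow initializes CUDA:

    import os

    # Assumption: this process should see only physical GPU 1.
    # Set the mask before the TensorFlow import below.
    os.environ['CUDA_VISIBLE_DEVICES'] = '1'

    import tensorflow as tf

    print(tf.config.list_physical_devices('GPU'))  # exactly one device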
Note that if you use CUDA_VISIBLE_DEVICES, the device names "/gpu:0", "/gpu:1", etc. refer to the 0th and 1st visible devices in the current process.
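A small sketch of that renumbering, assuming the machine's physical GPUs 2 and 3 are the ones exposed:

    import os
    os.environ['CUDA_VISIBLE_DEVICES'] = '2,3'  # hypothetical physical GPUs

    import tensorflow as tf

    # Inside this process the visible devices are renumbered from zero:
    # "/GPU:0" is physical GPU 2 and "/GPU:1" is physical GPU 3.
    for d in tf.config.list_logical_devices('GPU'):
        print(d.name)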