Why is TensorFlow using my GPU when the device is set to the CPU

TensorFlow is allocating all of my GPU memory and ignoring my instructions to use the CPU. How can I fix this?

Here's a code excerpt from my testprog:

Session *session;
SessionOptions opts = SessionOptions();

//force to allocate 0 memory on gpu
opts.config.mutable_gpu_options()->set_per_process_gpu_memory_fraction(0);
opts.config.mutable_gpu_options()->set_allow_growth(false);

//create session with these settings
TF_CHECK_OK(NewSession(opts, &session));
TF_CHECK_OK(session->Create(graph_def));

//set device to cpu
graph::SetDefaultDevice("/cpu:0", &graph_def);

//run arbitrary model
Status status = session->Run(classifierInput, {output_layer}, {}, &outputs);

TF_CHECK_OK(session->Close());

Calling nvidia-smi shows me:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 375.66                 Driver Version: 375.66                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Quadro P4000        Off  | 0000:01:00.0     Off |                  N/A |
| N/A   50C    P0    28W /  N/A |   7756MiB /  8114MiB |     42%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0      1784    G   /usr/bin/X                                     139MiB |
|    0      3828    G   qtcreator                                       28MiB |
|    0      7721    C   ...testprog/build/testprog                    7585MiB |
+-----------------------------------------------------------------------------+

Why is that so?

asked Feb 09 '18 by user3085931

2 Answers

Since this question is tagged C++, the solution is:

tensorflow::Session *sess;
tensorflow::SessionOptions options;

tensorflow::ConfigProto* config = &options.config;
// disable the GPU entirely: no GPU devices are created for this session
(*config->mutable_device_count())["GPU"] = 0;
// let TensorFlow place nodes elsewhere when their assigned device is unavailable
config->set_allow_soft_placement(true);

See the example here, and my other post on how TensorFlow places nodes.
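For completeness, here is a minimal sketch of how these options plug into session creation. It assumes the same graph_def as in the question and is not tested against any particular TensorFlow release:

#include "tensorflow/core/public/session.h"

tensorflow::SessionOptions options;
tensorflow::ConfigProto* config = &options.config;
(*config->mutable_device_count())["GPU"] = 0;  // create no GPU devices
config->set_allow_soft_placement(true);        // fall back when a device is missing

tensorflow::Session* session;
TF_CHECK_OK(tensorflow::NewSession(options, &session));
TF_CHECK_OK(session->Create(graph_def));       // graph_def as in the question

The important part is that device_count is set before the session is created; options passed afterwards have no effect on which devices get initialized.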

Edit: there is a related GitHub issue. You can try:

#include <stdlib.h>
// must run before TensorFlow initializes CUDA
setenv("CUDA_VISIBLE_DEVICES", "", 1);

or

// use the mutable accessor so the change actually lands in the config
// (gpu_options() returns by value into auto, so mutating that copy does nothing)
config->mutable_gpu_options()->set_visible_device_list("");

But this might give you failed call to cuInit: CUDA_ERROR_NO_DEVICE.
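Note that CUDA_VISIBLE_DEVICES is only read when CUDA initializes, so the setenv call has to happen before the first session (or any other TensorFlow/CUDA call). A sketch of the ordering, with the surrounding main() assumed rather than taken from the answer:

#include <stdlib.h>

int main(int argc, char** argv) {
  // Hide all CUDA devices before TensorFlow can enumerate them.
  setenv("CUDA_VISIBLE_DEVICES", "", 1);

  // ... only now build SessionOptions and call tensorflow::NewSession ...
  return 0;
}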

answered Nov 14 '22 by Patwie


Setting the device to the CPU does not prevent TensorFlow from initializing the GPU device. Disable the GPU through the session config instead:

import tensorflow as tf

session_conf = tf.ConfigProto(
    device_count={'CPU': 1, 'GPU': 0},  # expose no GPU devices to the session
    allow_soft_placement=True,
    log_device_placement=False
)
sess = tf.Session(config=session_conf)

Also, as a last resort, you can hide every GPU from CUDA via the environment:

alias nogpu='export CUDA_VISIBLE_DEVICES=-1;'

nogpu python disable_GPU_tensorflow.py

or, from C++ (again, before TensorFlow initializes CUDA):

setenv("CUDA_VISIBLE_DEVICES", "", 1);

answered Nov 14 '22 by Panos Kal.