TensorFlow is allocating all of my GPU memory and ignoring my instructions to use the CPU. How can I fix this?
Here's a code excerpt from my testprog:
Session *session;
SessionOptions opts = SessionOptions();

// Force TensorFlow to allocate no memory on the GPU
opts.config.mutable_gpu_options()->set_per_process_gpu_memory_fraction(0);
opts.config.mutable_gpu_options()->set_allow_growth(false);

// Create a session with these settings
TF_CHECK_OK(NewSession(opts, &session));
TF_CHECK_OK(session->Create(graph_def));

// Set the device to CPU
graph::SetDefaultDevice("/cpu:0", &graph_def);

// Run an arbitrary model
Status status = session->Run(classifierInput, {output_layer}, {}, &outputs);
TF_CHECK_OK(session->Close());
Calling nvidia-smi shows me:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 375.66                 Driver Version: 375.66                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Quadro P4000        Off  | 0000:01:00.0     Off |                  N/A |
| N/A   50C    P0    28W /  N/A |   7756MiB /  8114MiB |     42%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0      1784    G   /usr/bin/X                                     139MiB |
|    0      3828    G   qtcreator                                       28MiB |
|    0      7721    C   ...testprog/build/testprog                    7585MiB |
+-----------------------------------------------------------------------------+
Why is that so?
Since this question is tagged C++, here is the solution:
tensorflow::Session* sess;
tensorflow::SessionOptions options;
tensorflow::ConfigProto* config = &options.config;
// Disable the GPU entirely: report zero GPU devices to this session
(*config->mutable_device_count())["GPU"] = 0;
// Let TensorFlow fall back to the CPU for ops pinned to an unavailable device
config->set_allow_soft_placement(true);
TF_CHECK_OK(tensorflow::NewSession(options, &sess));
See the full example here, and my other post on how TensorFlow places nodes.
Edit: There is a GitHub issue about this. You can try:
#include <stdlib.h>
setenv("CUDA_VISIBLE_DEVICES", "", 1);
or
auto* gpu_options = config->mutable_gpu_options();
gpu_options->set_visible_device_list("");
But this might give you the error: failed call to cuInit: CUDA_ERROR_NO_DEVICE.
Setting the device parameters to CPU only does not by itself prevent TensorFlow from initializing the GPU device:
session_conf = tf.ConfigProto(
    device_count={'CPU': 1, 'GPU': 0},
    allow_soft_placement=True,
    log_device_placement=False
)
Also, as a last resort:
alias nogpu='export CUDA_VISIBLE_DEVICES=-1;'
nogpu python disable_GPU_tensorflow.py
or
setenv("CUDA_VISIBLE_DEVICES", "", 1);
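The export in the alias above applies to every later command in that shell session. If you only want to hide the GPUs for a single run, the variable can also be prefixed to one command; a quick sketch (plain shell, with echo standing in for the Python invocation):

```shell
# Prefixing scopes the variable to this one command only;
# CUDA treats -1 (or an empty list) as "no visible devices".
CUDA_VISIBLE_DEVICES=-1 sh -c 'echo "child sees: $CUDA_VISIBLE_DEVICES"'
```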