Is there a way to run TensorFlow purely on the CPU? All of the memory on my machine is hogged by a separate process running TensorFlow. I have tried setting per_process_gpu_memory_fraction to 0, unsuccessfully.
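(Presumably the option meant here is gpu_options.per_process_gpu_memory_fraction; a minimal sketch of that attempt, which still does not keep TensorFlow off the GPU:)
import tensorflow as tf
# Attempted workaround: cap the per-process GPU memory fraction at 0.
# TensorFlow still initializes the GPU device, so this does not force CPU-only execution.
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.0
sess = tf.Session(config=config)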
If a TensorFlow operation has both CPU and GPU implementations, the GPU device will be given priority when the operation is assigned. If you have more than one GPU, the GPU with the lowest ID will be selected by default. However, TensorFlow does not automatically place operations across multiple GPUs.
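A minimal TF 1.x sketch of this default placement, and of pinning an op to the CPU with tf.device (the tensor names are illustrative):
import tensorflow as tf

# Log where each operation is placed (CPU vs. GPU)
config = tf.ConfigProto(log_device_placement=True)

with tf.device('/cpu:0'):
    a = tf.constant([1.0, 2.0], name='a')  # pinned to the CPU explicitly

b = a * 2.0  # has a GPU kernel, so it lands on GPU:0 by default if one is visible

with tf.Session(config=config) as sess:
    print(sess.run(b))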
tf.test.is_gpu_available tells you whether a GPU is available. tf.test.gpu_device_name returns the name of the GPU device.
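A quick sketch of both calls (TF 1.x style):
import tensorflow as tf

print(tf.test.is_gpu_available())  # True if TensorFlow can use a GPU
print(tf.test.gpu_device_name())   # e.g. '/device:GPU:0', or '' if no GPU is found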
Have a look at this question or this answer.
To summarise, you can add this piece of code (the environment variable must be set before TensorFlow is imported):
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"  # hide every GPU from TensorFlow
import tensorflow as tf                    # import only after the variable is set
Playing with the CUDA_VISIBLE_DEVICES environment variable is one of, if not the, way to go whenever you have the GPU build of TensorFlow installed and you don't want to use any GPUs.
You want to either run
export CUDA_VISIBLE_DEVICES=
or alternatively use a virtualenv with a non-GPU installation of TensorFlow.
You can use only CPUs by opening a session with a GPU device count of 0:
sess = tf.Session(config=tf.ConfigProto(device_count={'GPU': 0}))
See https://www.tensorflow.org/api_docs/python/tf/ConfigProto for more details.
As proof that it works (for @Nicolas):
In Python, write:
import tensorflow as tf
sess_cpu = tf.Session(config=tf.ConfigProto(device_count={'GPU': 0}))
Then in a terminal:
nvidia-smi
You will see something like:
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0     24869    C   /.../python                                     99MiB |
+-----------------------------------------------------------------------------+
Then repeat the process, this time with a default (GPU-enabled) session. In Python, write:
import tensorflow as tf
sess_gpu = tf.Session()
Then in a terminal:
nvidia-smi
You will see something like:
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0     25900    C   /.../python                                   5775MiB |
+-----------------------------------------------------------------------------+