
Keras: real amount of GPU memory used

I'm using Keras with the TensorFlow backend, and looking at nvidia-smi is not sufficient to understand how much memory the current network architecture needs, because TensorFlow seems to allocate all available GPU memory up front.

So the question is: how do I find out the real GPU memory usage?

mrgloom asked May 18 '17 14:05

People also ask

Does keras automatically use GPU?

If your system has an NVIDIA® GPU and you have the GPU version of TensorFlow installed, then your Keras code will automatically run on the GPU.

How do I limit GPU usage in keras?

One way to stop TensorFlow from reserving all GPU RAM up front is to let the reservation grow on demand. This lets you train multiple networks on the same GPU, but you cannot set a hard threshold on the amount of memory reserved this way.

How do you check if GPU is being used keras?

Checking your GPU availability with Keras: the easiest way to check whether you have access to GPUs is to call tf.config.experimental.list_physical_devices('GPU').

How do I limit GPU memory usage in TensorFlow?

Limiting GPU memory growth: in some cases it is desirable for the process to allocate only a subset of the available memory, or to grow the memory usage only as the process needs it. To limit TensorFlow to a specific set of GPUs, use the tf.config.set_visible_devices method.
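
The growth behaviour mentioned above can be enabled via the tf.config namespace in newer TensorFlow versions. A minimal sketch (the exact location of these functions varies across releases; in some versions they live directly under tf.config rather than tf.config.experimental):

```python
import tensorflow as tf

# List the GPUs TensorFlow can see
gpus = tf.config.experimental.list_physical_devices('GPU')

# Let the memory reservation grow on demand instead of
# grabbing all available GPU memory at startup
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)
```

Note that memory growth must be set before any GPU has been initialized, so this should run at the very start of the program.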


2 Answers

It can be done using Timeline, which can give you a full trace, including memory logging. Similar to the code below:

from keras import backend as K
from tensorflow.python.client import timeline
import tensorflow as tf


with K.get_session() as s:
    # Ask TensorFlow to record a full trace (including memory) for each run
    run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
    run_metadata = tf.RunMetadata()

    # your fitting code here; pass options=run_options and
    # run_metadata=run_metadata to each s.run(...) call

    # Convert the collected step stats into a Chrome trace
    # that can be opened at chrome://tracing
    tl = timeline.Timeline(run_metadata.step_stats)
    trace = tl.generate_chrome_trace_format()
    with open('full_trace.json', 'w') as out:
        out.write(trace)

If you want to limit the GPU memory usage, it can also be done via gpu_options, like in the following code:

import tensorflow as tf
from keras.backend.tensorflow_backend import set_session

config = tf.ConfigProto()
# Reserve at most 20% of each visible GPU's memory for this process
config.gpu_options.per_process_gpu_memory_fraction = 0.2
set_session(tf.Session(config=config))
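
As a rough illustration of what the fraction means (the exact reservation depends on the allocator, so this is only an approximation), a small helper can translate the fraction into a memory cap:

```python
def reserved_mib(total_mib, fraction):
    """Approximate upper bound, in MiB, that TensorFlow will reserve
    for a given per_process_gpu_memory_fraction on a card with
    total_mib of memory. Helper name and rounding are illustrative."""
    return int(total_mib * fraction)

# A fraction of 0.2 on an 11178 MiB card (e.g. a GTX 1080 Ti):
print(reserved_mib(11178, 0.2))  # 2235
```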

Check the TensorFlow documentation for more details about the Timeline object.

As you are using TensorFlow as the backend, you can also use the tfprof profiling tool.

orabis answered Sep 25 '22 11:09


You can still use nvidia-smi after telling TensorFlow not to reserve all memory of the GPU, but to grow this reservation on demand:

import tensorflow as tf
import keras

config = tf.ConfigProto()
# Allocate GPU memory on demand instead of all at once
config.gpu_options.allow_growth = True
keras.backend.tensorflow_backend.set_session(tf.Session(config=config))
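
With allow_growth enabled, the number nvidia-smi reports reflects what the network actually uses, so it can be read programmatically. A minimal sketch (assumes nvidia-smi is on the PATH; the helper name is mine):

```python
import subprocess

def gpu_memory_used_mib(smi_output):
    """Parse one used-memory value (in MiB) per GPU from the output of
    `nvidia-smi --query-gpu=memory.used --format=csv,noheader,nounits`."""
    return [int(line.strip()) for line in smi_output.strip().splitlines()]

# Query the driver (uncomment on a machine with an NVIDIA GPU):
# out = subprocess.check_output(
#     ["nvidia-smi", "--query-gpu=memory.used", "--format=csv,noheader,nounits"],
#     universal_newlines=True)
# print(gpu_memory_used_mib(out))

# Parsing a sample two-GPU output:
print(gpu_memory_used_mib("1034\n512\n"))  # [1034, 512]
```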
gizzmole answered Sep 24 '22 11:09