We have only one GPU (with CUDA drivers installed), and whenever one user runs their code, all of the GPU memory gets allocated to that user, so the other users cannot use the GPU. Is there a way to prevent this behavior?
If you are using keras, add this at the beginning of your script:
import tensorflow as tf
from keras import backend as K

config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # allocate GPU memory on demand instead of all at once
sess = tf.Session(config=config)
K.set_session(sess)  # make Keras use this session
This will prevent TensorFlow from taking all of the GPU memory, as can be seen here.
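If you also want to cap how much of the GPU each process can claim (useful when several users share one card), TensorFlow 1.x exposes per_process_gpu_memory_fraction on the same gpu_options. A minimal sketch, where the 0.4 fraction is just an illustrative value you would tune for your setup:

import tensorflow as tf
from keras import backend as K

config = tf.ConfigProto()
# Let this process use at most ~40% of the GPU memory (illustrative value)
config.gpu_options.per_process_gpu_memory_fraction = 0.4
# Still allocate lazily up to that cap rather than all at once
config.gpu_options.allow_growth = True
K.set_session(tf.Session(config=config))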
If you are using tensorflow without keras, add this:
import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # allocate GPU memory on demand instead of all at once
session = tf.Session(config=config)  # pass any other Session arguments you need here
As shown here.
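Note that the snippets above use the TensorFlow 1.x API; in TensorFlow 2.x, tf.ConfigProto and tf.Session are only available under tf.compat.v1, and the native way to get the same behavior is to enable memory growth on the physical GPUs before any GPU operation runs. A minimal sketch:

import tensorflow as tf

# Must run before any op creates a GPU context
gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)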