Here's the problem: my Keras model is listening to a task queue. If no task arrives within 10 minutes, I want to unload the model and free the memory.
But I never thought such a job would be so hard...
Here are some failed attempts:
(1) Set model = None and hope the GC reclaims the memory.
(2) del model
(3) Use K.clear_session() and tf.reset_default_graph().
(4) Any combination of the above, followed by a manual gc.collect() (roughly as sketched below).
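For reference, this is roughly what the combined attempt from (4) looks like. This is only a minimal sketch: model is assumed to be an already-loaded Keras model, and the tf.compat.v1 call is the TF 2.x spelling of the reset_default_graph from (3).

import gc
import tensorflow as tf
from tensorflow.keras import backend as K  # or: from keras import backend as K

# model is assumed to hold an already-loaded Keras model.
model = None                         # (1) drop the reference
del model                            # (2) remove the name entirely
K.clear_session()                    # (3) free the Keras-managed session/graph state
tf.compat.v1.reset_default_graph()   #     tf.reset_default_graph() on TF 1.x
gc.collect()                         # (4) force a garbage-collection pass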
Is it possible to unload a model from memory without exiting the current process? Any other suggestions?
One option is to release the GPU memory through the numba library. Assuming you are using device 0:
from numba import cuda

cuda.select_device(0)  # bind to GPU 0, the device holding the model's memory
cuda.close()           # tear down this process's CUDA context on that device, releasing its memory
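One caveat: cuda.close() destroys the CUDA context for the whole process, and in many setups TensorFlow cannot re-initialize the GPU afterwards, so loading the model again later may require starting a fresh process.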