I want to run hyperparameter tuning for a Neural Style Transfer algorithm, which means I have a for-loop in which my model outputs an image generated with different hyperparameters on each iteration.
It runs in Google Colaboratory using a GPU runtime. At some point during execution I get an error saying that my GPU memory is almost full, and then the program stops.
So I was wondering whether there is a way to clear or reset the GPU memory after some specific number of iterations, so that the program can terminate normally (going through all the iterations of the for-loop, not stopping at e.g. 1500 of 3000 because the GPU memory is full).
I already tried this piece of code, which I found somewhere online:
import tensorflow as tf
from keras.backend.tensorflow_backend import get_session, set_session, clear_session

# Reset Keras Session
def reset_keras():
    sess = get_session()
    clear_session()
    sess.close()
    sess = get_session()

    try:
        del classifier  # this lives in global scope - change this as you need
    except NameError:
        pass

    # print(gc.collect())  # if it has done something you should see a number being printed

    # use the same config as you used to create the session
    config = tf.ConfigProto()
    config.gpu_options.per_process_gpu_memory_fraction = 1
    config.gpu_options.visible_device_list = "0"
    set_session(tf.Session(config=config))
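For the scenario in the question, a minimal sketch of calling this reset every few iterations of the tuning loop could look like the following; hyperparameter_grid, run_style_transfer and save_result are hypothetical placeholders for your own loop, generation step and saving logic, and the interval of 100 is just an assumption:

RESET_EVERY = 100  # assumed interval; lower it if memory still fills up

for i, params in enumerate(hyperparameter_grid):  # placeholder for your hyperparameter list
    image = run_style_transfer(**params)          # placeholder for the NST generation step
    save_result(image, params)                    # persist the output before freeing anything
    if (i + 1) % RESET_EVERY == 0:
        reset_keras()                             # free GPU memory held by stale graphs/sessions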
You may run the command "!nvidia-smi" inside a notebook cell to see which processes are holding GPU memory, and then kill the offending process id with "!kill process_id". You can also try using simpler data structures, such as dictionaries and vectors.
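In a Colab cell this would look roughly like the following; 12345 is a hypothetical process id that you would replace with the one reported by nvidia-smi:

!nvidia-smi    # lists GPU processes and how much memory each one holds
!kill 12345    # replace 12345 with the PID shown in the nvidia-smi output

Note that in Colab the GPU memory is usually held by the notebook's own kernel process, so killing it will restart the runtime.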
If you are using PyTorch, run the command torch.cuda.empty_cache().
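A minimal sketch for the PyTorch case; intermediate is a hypothetical tensor holding a generated image on the GPU, and the key point is that references to GPU tensors must be dropped before the cached memory can actually be released:

import gc
import torch

del intermediate          # drop references to GPU tensors you no longer need
gc.collect()              # make sure Python releases them
torch.cuda.empty_cache()  # free the cached, now-unused GPU memory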