During inference, we don't need to keep the activations from the previous layers as we propagate through the network. However, since we are not explicitly telling the program to discard them, it does not differentiate between training and inference passes. Is there a way (perhaps an easy flag, class, or method) to do this kind of memory management in TensorFlow? Would simply using tf.stop_gradient work?
The easiest way is to "freeze" (TensorFlow's terminology) your model using their freeze_graph.py script.
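As a rough sketch, the tool can be called from Python as well as from the command line. The paths, checkpoint name, and the output node name "softmax_out" below are placeholders for your own model, and this assumes a TF 1.x setup:

```python
# Sketch: freezing a saved graph + checkpoint into a single constant-only GraphDef.
from tensorflow.python.tools import freeze_graph

freeze_graph.freeze_graph(
    input_graph="model/graph.pbtxt",       # GraphDef written with tf.train.write_graph
    input_saver="",
    input_binary=False,                    # graph.pbtxt is in text format
    input_checkpoint="model/model.ckpt",   # checkpoint holding the trained variables
    output_node_names="softmax_out",       # comma-separated list of inference outputs
    restore_op_name="save/restore_all",
    filename_tensor_name="save/Const:0",
    output_graph="model/frozen_graph.pb",  # where the frozen graph is written
    clear_devices=True,
    initializer_nodes="")
```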
This script removes all unnecessary operations, replaces all variables with constants, and then exports the resulting graph back to disk.
For this, you need to specify which outputs of your graph you use during inference. Nodes that cannot reach those outputs (typically summaries, losses, gradients and the like) are automatically discarded.
Once the backward passes are eliminated, TensorFlow can optimize its memory usage and, in particular, automatically free or reuse the memory taken by unused nodes.
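You can also do the same pruning in-process with tf.graph_util.convert_variables_to_constants, which is what freeze_graph.py uses under the hood. A minimal self-contained sketch (TF 1.x; the tiny model and the names "input" / "softmax_out" are just for illustration):

```python
import numpy as np
import tensorflow as tf  # assumes TF 1.x (tf.Session, tf.graph_util)

# Build a tiny model with one named input and one named output.
x = tf.placeholder(tf.float32, shape=[None, 4], name="input")
w = tf.Variable(tf.random_normal([4, 2]), name="w")
probs = tf.nn.softmax(tf.matmul(x, w), name="softmax_out")

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Replace variables with constants and drop every op that cannot
    # reach "softmax_out" (losses, gradients, summaries, ...).
    frozen_def = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, ["softmax_out"])

# Import the pruned, constant-only graph and run inference from it.
graph = tf.Graph()
with graph.as_default():
    tf.import_graph_def(frozen_def, name="")

with tf.Session(graph=graph) as sess:
    out = sess.run("softmax_out:0",
                   feed_dict={"input:0": np.ones((3, 4), np.float32)})
    print(out.shape)  # (3, 2)
```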