 

Tensorflow: How to reduce memory footprint for inference only models?

During inference, we don't need to keep the activations from the previous layers as we propagate through the network. However, since we are not explicitly telling the program to discard them, it does not differentiate between training and inference passes. Is there a way (perhaps an easy flag, class, or method) to do this kind of memory management in TensorFlow? Would simply using tf.stop_gradient work?

Atila Orhon asked Oct 18 '25 07:10

1 Answer

The easiest way is to "freeze" (TensorFlow's terminology) your model using the bundled freeze_graph.py script.

This script removes all unnecessary operations, replaces all variables with constants, and then exports the resulting graph back to disk.
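Under the hood, freeze_graph.py relies on tf.graph_util.convert_variables_to_constants, so you can also freeze in-process. Here is a minimal sketch using the TF 1.x API, assuming a checkpoint at model.ckpt and an output node named output/predictions (both hypothetical placeholders):

```python
import tensorflow as tf

with tf.Session() as sess:
    # Restore the trained model (checkpoint path is a placeholder).
    saver = tf.train.import_meta_graph("model.ckpt.meta")
    saver.restore(sess, "model.ckpt")

    # Bake variable values into the graph as constants and prune
    # everything the listed outputs do not depend on.
    frozen = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, ["output/predictions"])

    # Serialize the inference-only graph to disk.
    with tf.gfile.GFile("frozen_model.pb", "wb") as f:
        f.write(frozen.SerializeToString())
```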

For this, you need to specify which nodes of your graph are the outputs you use during inference. Nodes that cannot reach the outputs (typically summaries, losses, gradients, and the like) are automatically discarded.
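You can see this pruning in isolation with tf.graph_util.extract_sub_graph, which keeps only the nodes the named outputs depend on (freezing performs the same kind of pruning internally). A toy sketch, with node names made up for the example:

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 4], name="input")
w = tf.Variable(tf.ones([4, 2]), name="w")
logits = tf.matmul(x, w, name="output")

loss = tf.reduce_sum(logits, name="loss")  # training-only node
grads = tf.gradients(loss, [w])            # training-only gradient nodes

graph_def = tf.get_default_graph().as_graph_def()
pruned = tf.graph_util.extract_sub_graph(graph_def, ["output"])

# The pruned graph no longer contains the loss or gradient nodes.
print(len(graph_def.node), "->", len(pruned.node))
```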

Once the backward passes are eliminated, TensorFlow can optimize its memory usage and, in particular, automatically free or reuse the memory that would otherwise be held for unused nodes.
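At inference time you then load the frozen GraphDef and run it directly; no variables, savers, or gradient ops ever enter the session. Continuing the freezing sketch above (the file name and tensor names are the same assumptions):

```python
import numpy as np
import tensorflow as tf

# Read the frozen graph back from disk.
graph_def = tf.GraphDef()
with tf.gfile.GFile("frozen_model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

graph = tf.Graph()
with graph.as_default():
    tf.import_graph_def(graph_def, name="")

# Tensor names carried over from the freezing sketch (hypothetical).
x = graph.get_tensor_by_name("input:0")
y = graph.get_tensor_by_name("output/predictions:0")

with tf.Session(graph=graph) as sess:
    batch = np.zeros((1, 4), dtype=np.float32)  # dummy input batch
    print(sess.run(y, feed_dict={x: batch}))
```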

P-Gn answered Oct 22 '25 04:10