Is there an easy way to check the size/memory footprint of a TensorFlow graph before running a TensorFlow session?
I would like to keep changing the system parameters that define the graph and see how large (in memory) the graph becomes as a result.
I have done something similar, where I wanted to see the number of parameters in my model:
import numpy as np
import tensorflow as tf

# tf.all_variables() is deprecated; tf.global_variables() is its replacement
num_params = 0
for v in tf.global_variables():
    num_params += int(np.prod(v.get_shape().as_list()))
print(num_params)
Now num_params contains the sum of the products of the dimensions of all the variables in your graph. If every variable is of type tf.float32, you can multiply num_params by 4 to get the number of bytes consumed by all of the variables. This is only a lower bound, though; there will be some additional overhead. Also, I think computing the gradients requires a lot of extra memory, since the activations at each point in the model must be stored for the backward pass.
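If your variables use mixed dtypes, a small variation can query the per-element size of each dtype instead of hard-coding 4. This is a minimal sketch assuming TF 1.x; the helper name approx_variable_bytes is mine, not part of any API:

import numpy as np
import tensorflow as tf

def approx_variable_bytes():
    # Lower-bound estimate: elements per variable times bytes per element.
    total = 0
    for v in tf.global_variables():
        elems = int(np.prod(v.get_shape().as_list()))
        total += elems * v.dtype.base_dtype.size  # e.g. 4 for tf.float32
    return total

# Usage: a single (1000, 1000) float32 variable accounts for ~4 MB.
w = tf.Variable(tf.zeros([1000, 1000]))
print(approx_variable_bytes())  # ~4000000 bytes

Keep in mind this still only covers the variables themselves, not intermediate activations or gradient buffers.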