Can TensorFlow automatically cache computations if they involve multiple calls to the same computation (sub-)graph?

For example, I have a matrix F in which each entry represents a computation based on trainable variables W. My objective function multiplies this matrix several times with different vectors (each time with unchanged W). Will TensorFlow recompute, for example, F[1,2] whenever I access it, or will it cache that value?
In theory, one could precompute the matrix F for a fixed W, such that each entry in F is a tf.constant. But that would prevent the correct computation of the gradients with respect to W.
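
For concreteness, here is a minimal sketch of the setup being described; the shapes, the tanh-based definition of F, and the vectors are purely illustrative:

import tensorflow as tf  # TensorFlow 1.x API

# Trainable variables that every entry of F depends on (illustrative shape).
W = tf.Variable(tf.random_normal([4, 4]), name="W")

# F is a matrix whose entries are computed from W; tanh stands in for the
# per-entry computation described above.
F = tf.tanh(tf.matmul(W, W, transpose_b=True))

# The objective multiplies F with several different vectors, W unchanged.
v1 = tf.constant([[1.0], [2.0], [3.0], [4.0]])
v2 = tf.constant([[0.5], [0.5], [0.5], [0.5]])
objective = tf.reduce_sum(tf.matmul(F, v1)) + tf.reduce_sum(tf.matmul(F, v2))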
TensorFlow performs a limited amount of caching, but it probably doesn't cover the case that you describe.
If you create a tf.Session with the following options, constant folding will be enabled:
import tensorflow as tf  # TensorFlow 1.x API

# Enable graph optimizations at level L2, which includes constant folding.
config = tf.ConfigProto(graph_options=tf.GraphOptions(
    optimizer_options=tf.OptimizerOptions(opt_level=tf.OptimizerOptions.L2)))
sess = tf.Session(config=config)
When you call sess.run() with this configuration, TensorFlow will determine the nodes that need to run, identify the subgraph of those nodes whose outputs are constant, evaluate that subgraph, and cache the results. Therefore, it avoids re-executing redundant computation.
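
As a small sketch (reusing the config object from above; the specific constants and placeholder are made up), this is the kind of subgraph that constant folding can evaluate once and cache:

a = tf.constant(3.0)
b = tf.constant(4.0)
c = a * b                        # depends only on constants, so it is foldable
x = tf.placeholder(tf.float32)
out = c + x                      # only this op depends on runtime input

with tf.Session(config=config) as sess:
    print(sess.run(out, feed_dict={x: 1.0}))   # 13.0
    print(sess.run(out, feed_dict={x: 2.0}))   # 14.0; c can be folded to 12.0 and cached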
However, in your question you mention that F is a function of some trainable variables. From TensorFlow's point of view, these variables are volatile: they may change at any time, so it does not cache values that are derived from them. If you want to reuse the same value for F multiple times, you could consider storing it in a tf.constant() so that the constant folding optimization is more useful.
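
As a rough sketch of that workaround (the definitions of W and F below are placeholders standing in for your real graph), you could evaluate F once for the current value of W and bake the result into a tf.constant(), bearing in mind that gradients will no longer flow from the frozen copy back to W:

import tensorflow as tf

# Placeholder definitions standing in for your real graph.
W = tf.Variable(tf.random_normal([4, 4]))
F = tf.tanh(tf.matmul(W, W, transpose_b=True))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    F_value = sess.run(F)        # evaluate F once for the current, fixed W

# The snapshot is now an ordinary constant: ops that consume F_frozen never
# re-execute the entries of F, but they also receive no gradient w.r.t. W.
F_frozen = tf.constant(F_value)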