A very simple example in TensorFlow: minimize (x + 1)^2, where x is a scalar. The code is:
import tensorflow as tf

x = tf.Variable(initial_value=3.0)  # the scalar variable to optimize
add = tf.add(x, 1)                  # x + 1
y = tf.square(add)                  # (x + 1)^2
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
train = optimizer.minimize(y)       # op that applies one gradient-descent step
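(For reference, this only builds the graph. A minimal sketch of actually running the optimization, which is not part of the question itself, would be:)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())  # x starts at 3.0
    for _ in range(100):
        sess.run(train)   # one gradient-descent step per call
    print(sess.run(x))    # about -0.47 after 100 steps, converging to -1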
Then write the graph to disk:
graph = tf.get_default_graph()
writer = tf.summary.FileWriter("some/dir/to/write/events")
writer.add_graph(graph=graph)
writer.close()  # flush the event file so TensorBoard can read it
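To open the visualization, run tensorboard --logdir some/dir/to/write/events (using the placeholder path from the snippet above) and go to the Graphs tab in your browser.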
Finally, visualizing it in TensorBoard, it looks like this:
The question is: why is the "Add" node connected to the gradients? Since I am trying to minimize y, I would have expected the "Square" node to be connected instead. Is this a bug? Can anyone explain it?
There is no bug involved. You just need to understand what a gradient is and how to compute one yourself. Here d/dx (x+1)^2 = 2*(x+1), which means you do not need the value of (x+1)^2 to calculate the gradient; the output of the "Add" node, x + 1, is all that is required. If you zoom into the gradients part of the graph, you will see that TensorFlow computed the gradient of your square and figured out which parts of the graph it needs there:
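You can check this yourself with tf.gradients (a minimal sketch, reusing the x and y defined in the question's code; x starts at 3.0, so the expected value is 8.0):

grad = tf.gradients(y, x)[0]  # symbolic dy/dx = 2 * (x + 1); it consumes the "Add" output, not the square
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(grad))     # 8.0 == 2 * (3.0 + 1)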
Here is a more interesting and more intuitive example:
import tensorflow as tf

x = tf.Variable(initial_value=3.0)
y = tf.cos(x)
train = tf.train.GradientDescentOptimizer(learning_rate=0.01).minimize(y)

with tf.Session() as sess:
    writer = tf.summary.FileWriter('logs', sess.graph)
    writer.close()
You should know that d/dx cos(x) = -sin(x), which means that only x is needed to calculate the gradient. And this is what you see in the graph:
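As a quick sanity check (a self-contained sketch, not part of the original answer), you can compare the gradient TensorFlow computes against -sin(x) directly:

import numpy as np
import tensorflow as tf

x = tf.Variable(initial_value=3.0)
y = tf.cos(x)
grad = tf.gradients(y, x)[0]  # symbolic dy/dx = -sin(x); depends only on x
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(grad))     # approx. -0.14112
    print(-np.sin(3.0))       # the same value, -sin(3.0)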