When does TensorFlow update the weights and biases in the for loop?
Below is the code from TensorFlow's GitHub repo, mnist_softmax.py:
    for _ in range(1000):
        batch_xs, batch_ys = mnist.train.next_batch(100)
        sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
Do the weights and biases get updated each time sess.run(train_step) is called? If so, does that mean that, in this program, TensorFlow updates the weights and biases 1000 times?
TensorFlow works by creating a graph of the computations required to compute the output of a network. Each of the basic operations, such as matrix multiplication and addition, is a node in this computation graph. In the TensorFlow MNIST example you are following, lines 40-46 define the network architecture.
The network is a simple linear model whose prediction is made using y = W*x + b (see line 43).
Next, you configure the training procedure for the network. This code uses cross-entropy as the loss function to minimize (see line 57), and the minimization is done with the gradient descent algorithm (see line 59).
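To make the model and loss concrete, here is a small NumPy sketch (not TensorFlow) of the same computation: a softmax over a linear layer, and the cross-entropy that the optimizer minimizes. The shapes match MNIST (784 input pixels, 10 classes), but the batch data here is made up for illustration.

```python
import numpy as np

def softmax(z):
    # Subtract the row-wise max before exponentiating for numerical stability
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(probs, labels):
    # labels are one-hot; average negative log-likelihood over the batch
    return -np.mean(np.sum(labels * np.log(probs + 1e-12), axis=1))

# Tiny fake "MNIST" batch: 4 flattened 784-pixel images, 10 classes
rng = np.random.default_rng(0)
x = rng.random((4, 784))
y_ = np.eye(10)[rng.integers(0, 10, size=4)]

# Parameters start at zero, as in the example
W = np.zeros((784, 10))
b = np.zeros(10)

y = softmax(x @ W + b)       # the model's prediction
loss = cross_entropy(y, y_)  # the quantity gradient descent minimizes
```

With W and b at zero the prediction is uniform over the 10 classes, so the initial loss is log(10), which is a handy sanity check when reimplementing this by hand.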
At this point, your network is fully constructed, but no computation has been performed yet. You now need to run these nodes so that actual computation is performed.
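The build-then-run split can be illustrated with a toy example (plain Python, not TensorFlow): constructing the "graph" merely records operations, and nothing executes until you explicitly run it.

```python
# A toy deferred-execution model: each "node" is a function that
# transforms shared state, and building the graph just appends nodes.
ops = []
state = {"x": 2}

ops.append(lambda s: s.update(x=s["x"] * 3))  # like adding a matmul node
ops.append(lambda s: s.update(x=s["x"] + 1))  # like adding an add node

assert state["x"] == 2  # still untouched: only the graph was built

def run(s):
    # Analogous to sess.run(): walk the graph and actually compute
    for op in ops:
        op(s)

run(state)
print(state["x"])  # 7: computation happened only when run() was called
```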
In the loop, each call to sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys}) evaluates train_step, which causes the GradientDescentOptimizer to take one step toward minimizing cross_entropy. This is how training progresses: the weights and biases are updated once per iteration, so in this program they are indeed updated 1000 times.
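The one-update-per-iteration behavior can be sketched in NumPy (a hypothetical stand-in for what each sess.run(train_step) does, with random data in place of mnist.train.next_batch):

```python
import numpy as np

rng = np.random.default_rng(0)
W = np.zeros((784, 10))
b = np.zeros(10)
lr = 0.5       # learning rate, as in the example's GradientDescentOptimizer
updates = 0

for _ in range(1000):
    # Stand-in for mnist.train.next_batch(100): random batch of 100 images
    xs = rng.random((100, 784))
    ys = np.eye(10)[rng.integers(0, 10, size=100)]

    # Forward pass: softmax over the linear layer
    logits = xs @ W + b
    logits -= logits.max(axis=1, keepdims=True)
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)

    # Gradient of mean cross-entropy w.r.t. the logits is (probs - labels)/batch
    grad = (probs - ys) / len(xs)

    # One gradient-descent update of W and b per loop pass
    W -= lr * (xs.T @ grad)
    b -= lr * grad.sum(axis=0)
    updates += 1

print(updates)  # 1000: one parameter update per iteration
```

Each pass through the loop performs exactly one update of W and b, which is why running the loop body 1000 times means 1000 updates.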