I'm trying to build a simple neural network in TensorFlow, but I have a question about gradient optimization.
It might be a naive question, but do I have to set conditions to stop the optimizer? Below is a sample printout from my network; you can see that after iteration 66 (each iteration is one batch gradient descent pass over all the data), the cost begins to increase again. So is it up to me to make sure the optimizer stops at this point? (Note: I didn't include all the output here, but the cost increases exponentially as the number of iterations increases.)
Thanks for any guidance.
iteration 64 with average cost of 654.621 and diff of 0.462708
iteration 65 with average cost of 654.364 and diff of 0.257202
iteration 66 with average cost of 654.36 and diff of 0.00384521
iteration 67 with average cost of 654.663 and diff of -0.302368
iteration 68 with average cost of 655.328 and diff of -0.665161
iteration 69 with average cost of 656.423 and diff of -1.09497
iteration 70 with average cost of 658.011 and diff of -1.58826
That's correct: the TensorFlow tf.train.Optimizer classes expose an operation that you can run to take one (gradient descent-style) step, but they do not monitor the current value of the cost or decide when to stop, so you may see the cost increasing once the network begins to overfit.
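As a rough sketch of what that stopping logic could look like in your own training loop (TF 1.x-style; the names cost, train_op, and feed here are placeholders for whatever your graph actually defines):

import tensorflow as tf

# cost = ...   # your cost tensor, defined elsewhere
# train_op = tf.train.GradientDescentOptimizer(0.01).minimize(cost)

prev_cost = float("inf")
tolerance = 1e-3   # stop once the per-iteration improvement falls below this

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(1000):
        # Run one optimization step and fetch the current cost in the same call.
        _, current_cost = sess.run([train_op, cost], feed_dict=feed)
        diff = prev_cost - current_cost
        print("iteration %d with average cost of %g and diff of %g"
              % (i, current_cost, diff))
        # Break out when the cost stops decreasing (or starts rising again).
        if diff < tolerance:
            break
        prev_cost = current_cost

In practice you'd often monitor the cost on a held-out validation set rather than the training cost, and keep the variables from the best iteration, but the idea is the same: the stopping condition lives in your loop, not in the optimizer.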