What is the use of the function tf.train.get_global_step()
in TensorFlow?
In machine learning terms, what is it equivalent to?
global_step refers to the number of batches seen by the graph. Every time a batch is provided, the weights are updated in the direction that minimizes the loss. global_step just keeps track of the number of batches seen so far.
You could use it to resume training exactly where you left off when the training procedure has been stopped for some reason. Of course you can always restart training without knowing the global_step (provided you save checkpoints regularly in your code), but unless you somehow keep track of how many iterations you have already performed, you will not know how many iterations are left after the restart. Sometimes you really want your model to be trained for exactly n iterations, not n plus an unknown number completed before a crash. So in my opinion, this is more of a practicality than a theoretical machine learning concept.
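For illustration, here is a minimal TF 1.x sketch of that resume behavior, using a hypothetical toy model and a made-up checkpoint directory; because the global step is an ordinary graph variable, tf.train.Saver checkpoints it along with the weights:

```python
import tensorflow as tf

# Hypothetical toy model: fit a single weight w so that 3*w ≈ 6.
w = tf.Variable(0.0)
loss = tf.square(3.0 * w - 6.0)

# The global step lives in the graph as a variable, so the Saver
# checkpoints it together with the weights.
global_step = tf.train.get_or_create_global_step()
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
train_op = optimizer.minimize(loss, global_step=global_step)

saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    ckpt = tf.train.latest_checkpoint('/tmp/model')  # hypothetical directory
    if ckpt:
        saver.restore(sess, ckpt)  # restores w AND the global step
    # Train for exactly 1000 iterations in total, across any number of restarts.
    while sess.run(global_step) < 1000:
        sess.run(train_op)
    saver.save(sess, '/tmp/model/ckpt', global_step=global_step)
```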
tf.train.get_global_step() returns the global step (a variable, the tensor from the variable node, or None) by looking it up through get_collection(tf.GraphKeys.GLOBAL_STEP) or get_tensor_by_name('global_step:0').
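A small sketch of that lookup, assuming the step was created with tf.train.create_global_step(), which registers it both in the collection and under the name 'global_step':

```python
import tensorflow as tf

step = tf.train.create_global_step()

# Both lookups resolve to the same underlying global step:
via_collection = tf.get_collection(tf.GraphKeys.GLOBAL_STEP)[0]
via_name = tf.get_default_graph().get_tensor_by_name('global_step:0')

print(tf.train.get_global_step())  # the variable above, or None if none exists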
The global step is widely used in learning rate decay (e.g. tf.train.exponential_decay; see Decaying the learning rate for more information). You can pass the global step to the optimizer's apply_gradients or minimize method to have it incremented by one on each training step.
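Putting the two together, here is a sketch (with made-up hyperparameter values) of a decayed learning rate driven by the global step, where minimize() increments the step once per run:

```python
import tensorflow as tf

global_step = tf.train.get_or_create_global_step()

# Learning rate shrinks by a factor of 0.96 every 1000 steps (arbitrary values).
learning_rate = tf.train.exponential_decay(
    learning_rate=0.1, global_step=global_step,
    decay_steps=1000, decay_rate=0.96, staircase=True)

w = tf.Variable(1.0)
loss = tf.square(w)
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
# Passing global_step makes minimize() (or apply_gradients()) increment it.
train_op = optimizer.minimize(loss, global_step=global_step)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(5):
        sess.run(train_op)
    print(sess.run(global_step))  # -> 5
```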