Does it only protect against asynchronous updates or does it also cause other access to the variable to wait for the update? I'm using the same model for training and inference at the same time and want to make sure that inference is always done on a consistent model.
Passing `use_locking=True` when creating a TensorFlow optimizer, or a variable assignment op, causes a lock to be acquired around the relevant updates to the variable. Other optimizers/assignments on the same variable that are also created with `use_locking=True` will be serialized against it.
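For concreteness, here is a minimal sketch of creating both kinds of locked writers on the same variable (TF 1.x API; the variable, loss, and learning rate are made up for illustration):

```python
import tensorflow as tf

# Shared variable updated by both an optimizer and an assignment.
v = tf.Variable(0.0, name="v")
loss = tf.square(v - 5.0)

# Optimizer created with use_locking=True: each update it applies
# to `v` acquires the variable's lock.
opt = tf.train.GradientDescentOptimizer(learning_rate=0.1, use_locking=True)
train_op = opt.minimize(loss)

# An assignment created with use_locking=True is serialized against
# the optimizer's locked updates on the same variable.
assign_op = v.assign(1.0, use_locking=True)
```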
However, there are two caveats that you should bear in mind when using this option:

1. Reads of the variables are not performed under the lock, so it is possible to observe intermediate states and partially applied updates. Serializing reads requires additional coordination, such as that provided by `tf.train.SyncReplicasOptimizer`; one workaround for your training-plus-inference case is sketched after this list.

2. Writes (optimizers/assignments) to the same variable created with `use_locking=False` are still possible, will not acquire the lock, and can race with the locked updates. The programmer is responsible for ensuring that such writes do not occur.
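Given those caveats, one pattern for running training and inference on the same model at once (this is not something `use_locking` provides by itself; the variable names and shapes below are assumptions for illustration) is to keep a shadow copy of the weights for inference and refresh it only between training steps, coordinated by a Python-level lock:

```python
import threading

import tensorflow as tf

# Hypothetical variables: `train_w` is updated by the training loop;
# `infer_w` is a shadow copy read only by the inference thread.
train_w = tf.Variable(tf.zeros([10, 10]), name="train_w")
infer_w = tf.Variable(tf.zeros([10, 10]), name="infer_w", trainable=False)

# Copies the current training weights into the inference copy.
sync_op = infer_w.assign(train_w, use_locking=True)

# Python-level lock shared by both threads, so the copy never
# overlaps a training step (TF's variable lock alone does not
# serialize the read of `train_w` inside the copy).
model_lock = threading.Lock()

def training_step(sess, train_op):
    with model_lock:
        sess.run(train_op)

def refresh_inference_weights(sess):
    with model_lock:
        sess.run(sync_op)

# Inference then reads only `infer_w`, which changes solely inside
# refresh_inference_weights(), so it always sees a complete model.
```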