In a general TensorFlow setup like

model = construct_model()
with tf.Session() as sess:
    train_model(sess)
where construct_model() contains the model definition, including random weight initialization (tf.truncated_normal), and train_model(sess) runs the training -
Which seeds do I have to set, and where, to ensure 100% reproducibility between repeated runs of the snippet above? The documentation for tf.random.set_random_seed is concise, but it left me a bit confused. I tried:
tf.set_random_seed(1234)
model = construct_model()
with tf.Session() as sess:
    train_model(sess)
But got different results each time.
The snippet below provides an example of how to obtain reproducible results (the snippet was truncated in the original; the seed value here is arbitrary):

import numpy as np
import tensorflow as tf
import random as python_random

# The below is necessary for starting Numpy generated random numbers
# in a well-defined initial state.
np.random.seed(123)
A random seed is used to ensure that results are reproducible. In other words, using this parameter makes sure that anyone who re-runs your code will get the exact same outputs.
Writing deterministic models: you can make your models deterministic by enabling op determinism. This means you can train a model and finish each run with exactly the same trainable variables, and that inference with a previously trained model will produce exactly the same outputs on each run.
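In TF 2.x this can be sketched roughly as follows (assumes TF >= 2.9; tf.keras.utils.set_random_seed seeds the Python, NumPy, and TensorFlow generators in one call):

```python
import tensorflow as tf

# Sketch, assuming TF >= 2.9: seed every RNG and enable deterministic ops.
tf.keras.utils.set_random_seed(1234)            # seeds random, numpy.random, tf.random
tf.config.experimental.enable_op_determinism()  # forces deterministic op kernels

# With both calls in place, repeated runs produce identical random tensors.
x = tf.random.uniform((3,))
```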
A solution that works on GPU as of today is to install tensorflow-determinism with the following:
pip install tensorflow-determinism
Then add the following to your code:
import tensorflow as tf
import os
os.environ['TF_DETERMINISTIC_OPS'] = '1'
source: https://github.com/NVIDIA/tensorflow-determinism
One possible reason is that the model-construction code uses the numpy.random module, so try setting the seed for NumPy too.
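A minimal sketch of what that looks like (the seed value is arbitrary):

```python
import numpy as np

# If construct_model() draws anything from numpy.random (e.g. for shuffling
# or initial values), seed NumPy before building the model.
np.random.seed(1234)
weights = np.random.randn(4)  # identical on every run once the seed is fixed
```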