tf.set_random_seed(SEED)
has no effect as far as I can tell. For example, running the code below several times inside an IPython notebook produces different output each time:
import tensorflow as tf

tf.set_random_seed(42)
sess = tf.InteractiveSession()
a = tf.constant([1, 2, 3, 4, 5])
tf.initialize_all_variables().run()  # a no-op here, since there are no variables
a_shuf = tf.random_shuffle(a)
print(a.eval())
print(a_shuf.eval())
sess.close()
If I set the seed explicitly, a_shuf = tf.random_shuffle(a, seed=42), the output is the same after each run. But why do I need to set the seed if I already call tf.set_random_seed(42)?
The equivalent code using numpy just works:
import numpy as np
np.random.seed(42)
a = [1,2,3,4,5]
np.random.shuffle(a)
print(a)
That only sets the graph-level random seed. If you execute this snippet several times in a row in the same process, the default graph keeps growing: each run adds a new shuffle op, and each of those ops derives a different operation-level seed from the graph-level seed, which is why the output changes. The details are described in the docstring for set_random_seed.
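Here is a minimal sketch (TF 1.x) of that mechanism: with only the graph-level seed set, each new random_shuffle op added to the same graph gets a different operation-level seed, so the two ops below will generally produce different permutations even though the graph-level seed is fixed.

import tensorflow as tf

tf.set_random_seed(42)
a = tf.constant([1, 2, 3, 4, 5])
shuf_1 = tf.random_shuffle(a)  # first shuffle op added to the graph
shuf_2 = tf.random_shuffle(a)  # second op, gets a different op-level seed

with tf.Session() as sess:
    print(sess.run(shuf_1))  # some permutation of [1 2 3 4 5]
    print(sess.run(shuf_2))  # generally a different permutation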
To get a deterministic a_shuf you can either call tf.reset_default_graph() between invocations, or set the operation-level seed explicitly: a_shuf = tf.random_shuffle(a, seed=42)
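A minimal sketch of the first fix (the shuffled helper is just for illustration): resetting the default graph before each run means tf.set_random_seed applies to a fresh graph, so the rebuilt shuffle op derives the same operation-level seed every time and a_shuf becomes deterministic across runs.

import tensorflow as tf

def shuffled():
    tf.reset_default_graph()   # start from an empty graph each time
    tf.set_random_seed(42)     # graph-level seed on the fresh graph
    a = tf.constant([1, 2, 3, 4, 5])
    a_shuf = tf.random_shuffle(a)
    with tf.Session() as sess:
        return sess.run(a_shuf)

print(shuffled())  # same permutation...
print(shuffled())  # ...on every call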