 

How to reproduce results when using dropout regularization in TensorFlow

I am training a neural network using dropout regularization. I save the weights and biases the network is initialized with, so that I can repeat the experiment when I get good results.

However, the use of dropout introduces randomness into the network: since dropout drops units at random, different units are dropped each time I rerun the network, even though I initialize it with the exact same weights and biases (if I understand this correctly).

Is there a way to make the dropout deterministic?


1 Answer

There are two primary ways to perform dropout in TensorFlow:

  • tf.nn.dropout (low-level)
  • tf.layers.dropout (high-level, uses tf.nn.dropout under the hood)

Both functions accept a seed parameter that is used to generate the random mask. By default, seed=None, which means a random seed, i.e. non-deterministic behavior. To make the result deterministic, you can either set the seed at the per-op level, call tf.set_random_seed (which sets the graph-level random seed), or, better, do both.

Example:

import tensorflow as tf

tf.InteractiveSession()
tf.set_random_seed(0)  # graph-level seed

x = tf.ones([10])

# Low-level API: op-level seed passed directly, keep_prob is the keep probability
y = tf.nn.dropout(x, keep_prob=0.5, seed=0)
for i in range(5):
  print(y.eval())

# High-level API: same seed parameter, rate is the drop probability
z = tf.layers.dropout(inputs=x, rate=0.5, training=True, seed=0)
for i in range(5):
  print(z.eval())

Caveat: in general, there are other sources of randomness in a training script, so you also have to set the pure Python seed (random.seed) and the NumPy seed (numpy.random.seed).
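For example, a minimal sketch of seeding all three sources at the top of a training script (the SEED constant is just an illustrative value, not something prescribed by any of these libraries):

import random

import numpy as np
import tensorflow as tf

SEED = 0  # illustrative value; any fixed integer works

random.seed(SEED)         # pure Python RNG (e.g. shuffling file lists)
np.random.seed(SEED)      # NumPy RNG (e.g. batching, NumPy-based initialization)
tf.set_random_seed(SEED)  # TensorFlow graph-level RNG (dropout, tf initializers)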


