I'm trying to use the dropout functionality in TensorFlow:
import tensorflow as tf

sess = tf.InteractiveSession()
initial = tf.truncated_normal([1, 4], stddev=0.1)
x = tf.Variable(initial)
keep_prob = tf.placeholder("float")
dx = tf.nn.dropout(x, keep_prob)
sess.run(tf.initialize_all_variables())
sess.run(dx, feed_dict={keep_prob: 0.5})
sess.close()
This example is very similar to how it's done in the tutorial; however, I end up with the following error:
RuntimeError: min: Conversion function <function constant at 0x7efcc6e1ec80> for type <type 'object'> returned incompatible dtype: requested = float32_ref, actual = float32
I have some trouble understanding the dtype float32_ref, which seems to be at the root of the problem. I've also tried to specify dtype=tf.float32, but that doesn't fix anything.
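For reference, the attempt with an explicit dtype looked roughly like this (the dtype is set on the initializer, which already defaults to float32, so nothing changes and the same error appears):

# dtype is float32 by default; making it explicit does not help
initial = tf.truncated_normal([1, 4], stddev=0.1, dtype=tf.float32)
x = tf.Variable(initial)
dx = tf.nn.dropout(x, keep_prob)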
I also tried this example, which works fine with float32:
import numpy as np

sess = tf.Session()
x = tf.Variable(np.array([1.0, 2.0, 3.0, 4.0]))
sess.run(x.initializer)
x = tf.cast(x, tf.float32)
prob = tf.Variable(np.array([0.5]))
sess.run(prob.initializer)
prob = tf.cast(prob, tf.float32)
dx = tf.nn.dropout(x, prob)
sess.run(dx)
sess.close()
However, if I cast to float64 instead of float32, I get the same error:
RuntimeError: min: Conversion function <function constant at 0x7efcc6e1ec80> for type <type 'object'> returned incompatible dtype: requested = float64_ref, actual = float64
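The only change in that variant is the cast target; this is the version that raises the RuntimeError above:

x = tf.cast(x, tf.float64)
prob = tf.cast(prob, tf.float64)
dx = tf.nn.dropout(x, prob)  # raises the float64_ref RuntimeError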
Edit:
It seems like this problem only arises when using dropout directly on Variables; it works for placeholders and for products of Variables and placeholders. Example:
sess = tf.InteractiveSession()
x = tf.placeholder(tf.float64)
initial = tf.truncated_normal([1, 5], stddev=0.1, dtype=tf.float64)
y = tf.Variable(initial)
keep_prob = tf.placeholder(tf.float64)
dx = tf.nn.dropout(x * y, keep_prob)
sess.run(tf.initialize_all_variables())
sess.run(dx, feed_dict={x: np.array([1.0, 2.0, 3.0, 4.0, 5.0]), keep_prob: 0.5})
sess.close()
This is a bug in the implementation of tf.nn.dropout that was fixed in a recent commit, and will be included in the next release of TensorFlow. For now, to avoid the issue, either build TensorFlow from source, or modify your program as follows:
# dx = tf.nn.dropout(x, keep_prob)               # fails when x is a Variable
dx = tf.nn.dropout(tf.identity(x), keep_prob)    # wrap the Variable in tf.identity
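Applied to the first snippet from the question, the workaround would look roughly like this (a minimal sketch; the only change is the tf.identity wrapper around the Variable):

import tensorflow as tf

sess = tf.InteractiveSession()
initial = tf.truncated_normal([1, 4], stddev=0.1)
x = tf.Variable(initial)
keep_prob = tf.placeholder("float")
# tf.identity converts the Variable's ref output into a plain Tensor,
# which sidesteps the float32_ref conversion bug in tf.nn.dropout.
dx = tf.nn.dropout(tf.identity(x), keep_prob)
sess.run(tf.initialize_all_variables())
print(sess.run(dx, feed_dict={keep_prob: 0.5}))
sess.close()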