How can I change G_h1 = tf.nn.relu(tf.matmul(z, G_W1) + G_b1)
to leaky relu? I have tried looping over the tensor using max(value, 0.01*value)
but I get TypeError: Using a tf.Tensor as a Python bool is not allowed.
I also tried to find the source code for relu on the TensorFlow GitHub so that I could modify it into leaky relu, but I couldn't find it.
Leaky ReLU keeps the advantages of ReLU while fixing the slope of the negative part to a small constant, to cope with the dying-gradient problem. PReLU goes a step further and trains that slope as a parameter; experimental results with CNNs built this way have been reported as effective and feasible.
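To illustrate the PReLU idea mentioned above, here is a minimal TF1-style sketch (not from the original answers; the name prelu and the 0.25 initial value are assumptions) where the negative slope is a trainable variable:

import tensorflow as tf

def prelu(x, name="prelu"):
    with tf.variable_scope(name):
        # One trainable slope per channel; the 0.25 initial value is an assumption.
        alpha = tf.get_variable("alpha", shape=x.get_shape()[-1:],
                                initializer=tf.constant_initializer(0.25))
        # Same decomposition as leaky ReLU, but here alpha is learned.
        return tf.nn.relu(x) - alpha * tf.nn.relu(-x)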
The Leaky ReLU function is an improvement on the regular ReLU function. To address the problem of zero gradient for negative values, Leaky ReLU gives an extremely small linear component of x to negative inputs. Mathematically: f(x) = x for x >= 0, and f(x) = alpha * x for x < 0, where alpha is a small constant such as 0.01.
A leaky ReLU is a rectified linear unit (ReLU) with a small non-zero slope in the negative part of its input space. The motivation for leaky ReLUs is that vanilla ReLUs have a gradient of zero in the negative part of their input space, which can harm learning. This corresponds to the Keras LeakyReLU layer.
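As a quick sanity check of that definition, here is a minimal NumPy sketch (alpha = 0.01 is an assumed value, matching the question):

import numpy as np

def leaky_relu(x, alpha=0.01):
    # f(x) = x for x >= 0, alpha * x for x < 0
    return np.where(x >= 0, x, alpha * x)

print(leaky_relu(np.array([-2.0, 0.0, 3.0])))  # [-0.02  0.    3.  ]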
You could write one based on tf.nn.relu, something like:

import tensorflow as tf

def lrelu(x, alpha):
    # tf.nn.relu(x) is x for x >= 0; tf.nn.relu(-x) is -x for x < 0,
    # so the second term contributes alpha * x on the negative side.
    return tf.nn.relu(x) - alpha * tf.nn.relu(-x)
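Applied to the line from the question (alpha = 0.01 here is just an illustrative value):

G_h1 = lrelu(tf.matmul(z, G_W1) + G_b1, alpha=0.01)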
EDIT
TensorFlow 1.4 now has a native tf.nn.leaky_relu.
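With TensorFlow >= 1.4, the line from the question can then simply become (tf.nn.leaky_relu defaults to alpha=0.2; 0.01 here is an illustrative choice):

G_h1 = tf.nn.leaky_relu(tf.matmul(z, G_W1) + G_b1, alpha=0.01)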
If alpha < 1 (which it should be), you can use tf.maximum(x, alpha * x).
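This works because for alpha < 1, max(x, alpha * x) evaluates to x when x >= 0 and to alpha * x when x < 0. Wrapped up as a function (a sketch, with 0.01 as an assumed default slope):

def lrelu(x, alpha=0.01):
    # For alpha < 1: x >= alpha * x when x >= 0, and alpha * x > x when x < 0.
    return tf.maximum(x, alpha * x)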