Theano HiddenLayer Activation Function

Is there any way to use the Rectified Linear Unit (ReLU) as the activation function of the hidden layer, instead of tanh() or sigmoid(), in Theano? The implementation of the hidden layer is as follows, and as far as I have searched on the internet, ReLU is not implemented inside Theano.

import theano.tensor as T

class HiddenLayer(object):
    def __init__(self, rng, input, n_in, n_out, W=None, b=None, activation=T.tanh):
        pass  # body elided
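For context, such a layer (as in the Theano deep-learning tutorial) applies activation to a linear transform of its input. A minimal sketch of the elided body, with the tutorial's random weight initialization replaced by zeros for brevity:

import numpy
import theano
import theano.tensor as T

class HiddenLayer(object):
    def __init__(self, rng, input, n_in, n_out, W=None, b=None, activation=T.tanh):
        # Shared parameters; the tutorial initializes W from rng, zeros suffice here.
        if W is None:
            W = theano.shared(numpy.zeros((n_in, n_out), dtype=theano.config.floatX), name='W')
        if b is None:
            b = theano.shared(numpy.zeros((n_out,), dtype=theano.config.floatX), name='b')
        self.W, self.b = W, b

        # Linear transform, then the activation; any callable (e.g. a ReLU
        # function) can be passed as `activation`.
        lin_output = T.dot(input, W) + b
        self.output = lin_output if activation is None else activation(lin_output)
        self.params = [W, b]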
Asked Oct 21 '14 by A.M.

2 Answers

ReLU is easy to do in Theano:

theano.tensor.switch(x < 0, 0, x)

To use it in your case, write a Python function that implements ReLU and pass it as activation:

import theano

def relu(x):
    return theano.tensor.switch(x < 0, 0, x)

HiddenLayer(..., activation=relu)

Some people use this implementation: x * (x > 0)

UPDATE: Newer Theano versions have theano.tensor.nnet.relu(x) available.
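If your Theano is new enough, you can pass it straight in as the activation. In recent versions theano.tensor.nnet.relu also accepts an alpha argument (the slope used where x < 0, defaulting to 0), so a leaky ReLU is a one-liner. A minimal usage sketch against the HiddenLayer from the question:

import theano.tensor as T

# Built-in ReLU as the hidden-layer activation:
HiddenLayer(..., activation=T.nnet.relu)

# Leaky ReLU: alpha is the slope applied where x < 0.
HiddenLayer(..., activation=lambda x: T.nnet.relu(x, alpha=0.01))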

Answered by nouiz

UPDATE: The latest version of Theano has native support for ReLU: T.nnet.relu, which should be preferred over custom solutions.

I decided to compare the speed of the solutions, since it is very important for NNs. I compared the speed of the function itself and of its gradient: for the forward pass, switch is preferred, while the gradient is faster for x * (x > 0). All the computed gradients are correct.

import numpy
import theano
import theano.tensor as T

def relu1(x):
    return T.switch(x < 0, 0, x)

def relu2(x):
    return T.maximum(x, 0)

def relu3(x):
    return x * (x > 0)


z = numpy.random.normal(size=[1000, 1000])
for f in [relu1, relu2, relu3]:
    x = theano.tensor.matrix()
    fun = theano.function([x], f(x))
    %timeit fun(z)   # IPython magic; times the forward pass
    assert numpy.all(fun(z) == numpy.where(z > 0, z, 0))

Output (time to compute the ReLU function):
>100 loops, best of 3: 3.09 ms per loop
>100 loops, best of 3: 8.47 ms per loop
>100 loops, best of 3: 7.87 ms per loop

for f in [relu1, relu2, relu3]:
    x = theano.tensor.matrix()
    fun = theano.function([x], theano.grad(T.sum(f(x)), x))
    %timeit fun(z)   # times the gradient computation
    assert numpy.all(fun(z) == (z > 0))

Output (time to compute the gradient):
>100 loops, best of 3: 8.3 ms per loop
>100 loops, best of 3: 7.46 ms per loop
>100 loops, best of 3: 5.74 ms per loop

Finally, let's compare with how the gradient should ideally be computed (the fastest way, a bare x > 0):

x = theano.tensor.matrix()
fun = theano.function([x], x > 0)
%timeit fun(z)
Output:
>100 loops, best of 3: 2.77 ms per loop

So Theano generates suboptimal code for the gradient. IMHO, the switch version should be preferred today.
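To see where the extra time goes, you can dump the optimized graphs Theano compiles for each gradient; theano.printing.debugprint works on compiled functions. A quick inspection sketch, reusing relu1-relu3 from above:

import theano
import theano.tensor as T

x = theano.tensor.matrix()
for f in [relu1, relu2, relu3]:
    fun = theano.function([x], theano.grad(T.sum(f(x)), x))
    # Dump the optimized graph; extra ops relative to a bare (x > 0)
    # comparison explain the slower timings above.
    theano.printing.debugprint(fun)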

Answered by Alleo