I want to make a simple neural network that uses the ReLU function. Can someone give me a clue about how I can implement the function using NumPy?
EXAMPLE 1: Define the NumPy relu function. Here, we're using the def keyword to define a new function named relu. There's one input: x. The return value is the maximum of 0 and x, which we compute with NumPy's maximum function.
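In code, that definition looks like this:

import numpy as np

def relu(x):
    # Elementwise maximum of 0 and x: negatives become 0, positives pass through
    return np.maximum(0, x)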
ReLU provides state-of-the-art results and is computationally very efficient at the same time. The basic concept of the ReLU activation function is: return 0 if the input is negative, otherwise return the input as-is. We can represent it mathematically as ReLU(x) = max(0, x).
There are a few ways.
>>> import numpy as np
>>> x = np.random.random((3, 2)) - 0.5
>>> x
array([[-0.00590765,  0.18932873],
       [-0.32396051,  0.25586596],
       [ 0.22358098,  0.02217555]])
>>> np.maximum(x, 0)
array([[ 0.        ,  0.18932873],
       [ 0.        ,  0.25586596],
       [ 0.22358098,  0.02217555]])
>>> x * (x > 0)
array([[-0.        ,  0.18932873],
       [-0.        ,  0.25586596],
       [ 0.22358098,  0.02217555]])
>>> (abs(x) + x) / 2
array([[ 0.        ,  0.18932873],
       [ 0.        ,  0.25586596],
       [ 0.22358098,  0.02217555]])
Timing the three methods with the following code:
import numpy as np
x = np.random.random((5000, 5000)) - 0.5
print("max method:")
%timeit -n10 np.maximum(x, 0)
print("multiplication method:")
%timeit -n10 x * (x > 0)
print("abs method:")
%timeit -n10 (abs(x) + x) / 2
We get:
max method:
10 loops, best of 3: 239 ms per loop
multiplication method:
10 loops, best of 3: 145 ms per loop
abs method:
10 loops, best of 3: 288 ms per loop
So the multiplication method seems to be the fastest.
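Note that the %timeit calls above only work in IPython/Jupyter. If you want to reproduce the comparison in a plain Python script, a rough equivalent using the standard-library timeit module might look like this (a sketch; absolute numbers will vary with your hardware and NumPy version):

import timeit
import numpy as np

x = np.random.random((5000, 5000)) - 0.5

for label, stmt in [
    ("max method", "np.maximum(x, 0)"),
    ("multiplication method", "x * (x > 0)"),
    ("abs method", "(abs(x) + x) / 2"),
]:
    # Best of 3 repeats, averaged over 10 runs each, to mirror %timeit -n10
    best = min(timeit.repeat(stmt, globals=globals(), number=10, repeat=3)) / 10
    print(f"{label}: {best * 1000:.0f} ms per loop")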
You can do it in a much easier way:
def ReLU(x):
    # Zero out negative entries; positive entries pass through unchanged
    return x * (x > 0)

def dReLU(x):
    # Derivative of ReLU: 1 where x > 0, else 0
    return 1. * (x > 0)
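Since the question mentions a simple neural network, here is a minimal sketch of where ReLU and dReLU fit into one dense layer's forward and backward pass. The shapes, variable names, and upstream gradient below are illustrative assumptions, not part of the answer above:

import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 3))   # 4 samples, 3 features (assumed shapes)
W = rng.standard_normal((3, 2))   # weights of an assumed 3 -> 2 dense layer

z = X @ W                  # pre-activation
a = ReLU(z)                # forward pass through ReLU
grad_a = np.ones_like(a)   # stand-in for the gradient flowing from the next layer
grad_z = grad_a * dReLU(z) # backward pass: chain rule through ReLU
grad_W = X.T @ grad_z      # gradient with respect to the weights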