
Using Gekko's brain module, how do I determine how many layers and what type of layer to use to solve a deep learning problem?

I am learning to use Gekko's brain module for deep learning applications.

I have been setting up a neural network to learn the numpy.cos() function and reproduce its output.

I get a good fit when the bounds on my training are:

x = np.linspace(0,2*np.pi,100)

But the model falls apart when I try to extend the bounds to:

x = np.linspace(0,3*np.pi,100)

What do I need to change in my neural network to increase the flexibility of my model so that it works for other bounds?

This is my code:

from gekko import brain
import numpy as np
import matplotlib.pyplot as plt

#Set up neural network 
b = brain.Brain()
b.input_layer(1)
b.layer(linear=2)
b.layer(tanh=2)
b.layer(linear=2)
b.output_layer(1)

#Train neural network
x = np.linspace(0,2*np.pi,100)
y = np.cos(x)
b.learn(x,y)

#Calculate using trained neural network
xp = np.linspace(-2*np.pi,4*np.pi,100)
yp = b.think(xp)

#Plot results
plt.figure()
plt.plot(x,y,'bo')
plt.plot(xp,yp[0],'r-')
plt.show()

These are the results when training to 2π:

[plot: good fit over the training range]

These are the results when training to 3π:

[plot: the fit falls apart]

asked Nov 21 '19 by Joseph

2 Answers

I get the following result if I increase the number of nodes in the tanh layer to 5: b.layer(tanh=5)
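
For reference, here is a minimal sketch of that change. It reuses the training and plotting code from the question with only the hidden layer widened; training on the 3*pi range is an assumption to match the case that failed:

from gekko import brain
import numpy as np
import matplotlib.pyplot as plt

#Set up neural network with a wider tanh layer
b = brain.Brain()
b.input_layer(1)
b.layer(linear=2)
b.layer(tanh=5)   #increased from 2 to 5 nodes
b.layer(linear=2)
b.output_layer(1)

#Train on the 3*pi range from the question
x = np.linspace(0,3*np.pi,100)
y = np.cos(x)
b.learn(x,y)

#Predict over a wider range and plot
xp = np.linspace(-2*np.pi,4*np.pi,100)
yp = b.think(xp)

plt.figure()
plt.plot(x,y,'bo')
plt.plot(xp,yp[0],'r-')
plt.show()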

There are probably multiple answers to this question, though. Increasing the number of layers or changing the activation function may also help, and you can always try a different solver. Finding the best network architecture is an optimization problem of its own; some people have tried to solve it with genetic algorithms, for example:

https://arxiv.org/pdf/1808.03818.pdf

[plot: the network with tanh=5 fits the cosine data]

answered Oct 17 '22 by reyPanda


Data-driven (black-box) models inherently suffer from overfitting and underfitting. If you give the model too many degrees of freedom, it will fit the training data set almost perfectly while fitting the validation data set much less well.

The same applies to neural-network models: the more layers (and nodes) you add, the more easily the model overfits.

There are several ways to avoid overfitting.

Generally, you balance the model errors on the training set and the validation set: stop adding layers (or nodes) at the point where the validation error starts to increase while the training error keeps decreasing.
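
As a rough illustration of that check, here is a minimal sketch. It reuses the Gekko network from the question; the hold-out split and the mean-squared-error comparison are illustrative assumptions, not something the brain module provides:

from gekko import brain
import numpy as np

#Same network structure as in the question
b = brain.Brain()
b.input_layer(1)
b.layer(linear=2)
b.layer(tanh=2)
b.layer(linear=2)
b.output_layer(1)

#Hold out every 4th point as a validation set (illustrative split)
x = np.linspace(0,3*np.pi,100)
y = np.cos(x)
val = np.arange(len(x)) % 4 == 0
x_train, y_train = x[~val], y[~val]
x_val, y_val = x[val], y[val]

#Train only on the training subset
b.learn(x_train, y_train)

#Compare mean squared errors; a validation error that grows while the
#training error keeps shrinking is the sign of overfitting described above
mse_train = np.mean((np.array(b.think(x_train)[0]) - y_train)**2)
mse_val = np.mean((np.array(b.think(x_val)[0]) - y_val)**2)
print('training MSE:  ', mse_train)
print('validation MSE:', mse_val)

Repeating this while gradually increasing the layer sizes shows where the validation error turns around.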

answered Oct 17 '22 by Junho Park