Can neural networks approximate any function given enough hidden neurons?

I understand that neural networks with any number of hidden layers can approximate nonlinear functions. However, can they approximate:

f(x) = x^2

I can't think of how they could. It seems like a very obvious limitation of neural networks, one that could potentially limit what they can do. For example, because of this limitation, neural networks probably can't properly approximate many functions used in statistics, like the exponential moving average, or even variance.

Speaking of moving averages, can recurrent neural networks properly approximate that? I understand how a feedforward neural network, or even a single linear neuron, can output a moving average using the sliding window technique, but how would a recurrent neural network do it without X hidden layers (X being the moving average size)?
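
For concreteness, here is a minimal numpy sketch of that sliding-window neuron (the window size and input signal are made up for illustration): a single linear neuron whose N weights are all fixed to 1/N outputs exactly the N-point simple moving average.

```python
import numpy as np

N = 5                                  # window size (arbitrary for illustration)
x = np.random.randn(100)               # some input signal

# A single linear neuron: output = w . window, with every weight fixed to 1/N.
w = np.full(N, 1.0 / N)

sma = np.array([w @ x[t - N:t] for t in range(N, len(x) + 1)])

# Sanity check against a direct moving-average computation.
assert np.allclose(sma, np.convolve(x, w, mode="valid"))
```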

Also, let us assume we don't know the original function f, which happens to take the average of the last 500 inputs and then output a 1 if it's higher than 3 and a 0 if it's not. For a second, pretend we don't know that; it's a black box.

How would a recurrent neural network approximate that? We would first need to know how many timesteps it should have, which we don't. Perhaps an LSTM network could, but even then, what if it's not a simple moving average but an exponential moving average? I don't think even an LSTM can do it.
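
For reference, the exponential moving average can be written as a one-step linear recurrence, EMA_t = α·x_t + (1−α)·EMA_{t−1}, so it needs only a single number of state rather than one layer per timestep. A minimal sketch of that recurrence as a single linear recurrent neuron (α chosen arbitrarily):

```python
import numpy as np

alpha = 0.1                      # smoothing factor (arbitrary for illustration)
x = np.random.randn(1000)        # input stream

# A single recurrent neuron with identity activation, input weight alpha,
# and recurrent weight (1 - alpha) computes the EMA exactly:
# h_t = alpha * x_t + (1 - alpha) * h_{t-1}
h = 0.0
ema = np.empty_like(x)
for t, xt in enumerate(x):
    h = alpha * xt + (1.0 - alpha) * h
    ema[t] = h
```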

Even worse, what if the f(x,x1) that we are trying to learn is simply

f(x,x1) = x * x1

That seems very simple and straightforward. Can a neural network learn it? I don't see how.

Am I missing something huge here or are machine learning algorithms extremely limited? Are there other learning techniques besides neural networks that can actually do any of this?

asked Sep 01 '14 by Essam Al-Mansouri


People also ask

Can neural networks approximate any function?

So why do we like using neural networks for function approximation? The reason is that they are universal approximators. In theory, they can be used to approximate any function.

How many hidden layers are necessary for a neural network to be able to represent any continuous function?

Jeff Heaton (see page 158 of the linked text) states that one hidden layer allows a neural network to approximate any function involving "a continuous mapping from one finite space to another." With two hidden layers, the network is able to "represent an arbitrary decision boundary to arbitrary accuracy."

Can a neural network reach 100% accuracy?

If your neural network got the line right, it is possible for it to reach 100% accuracy. Remember that a neuron's output (before it goes through an activation function) is a linear combination of its inputs, so this is a pattern that a network consisting of a single neuron can learn.

How many hidden layers and neurons do you need in your artificial neural network?

The number of hidden neurons should be 2/3 the size of the input layer, plus the size of the output layer. The number of hidden neurons should be less than twice the size of the input layer.


3 Answers

The key point to understand is the word compact:

Neural networks (like any other approximation structure, e.g., polynomials, splines, or radial basis functions) can approximate any continuous function only within a compact set.

In other words, the theory states that, given:

  1. A continuous function f(x),
  2. A finite range for the input x, [a,b], and
  3. A desired approximation accuracy ε>0,

then there exists a neural network that approximates f(x) with an approximation error less than ε, everywhere within [a,b].

Regarding your example of f(x) = x^2: yes, you can approximate it with a neural network within any finite range: [-1,1], [0, 1000], etc. To visualise this, imagine that you approximate f(x) within [-1,1] with a step function. Can you do it on paper? Note that if you make the steps narrow enough, you can achieve any desired accuracy. The way neural networks approximate f(x) is not much different from this.
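
Here is a minimal numpy sketch of that paper exercise (the step counts are arbitrary): snap each x to the centre of its step and square it; the worst-case error over [-1,1] shrinks as the steps get narrower.

```python
import numpy as np

def step_approx(x, n_steps):
    """Piecewise-constant approximation of x**2 on [-1, 1]:
    snap x to the centre of its step, then square."""
    edges = np.linspace(-1.0, 1.0, n_steps + 1)
    centers = (edges[:-1] + edges[1:]) / 2.0
    idx = np.clip(np.searchsorted(edges, x) - 1, 0, n_steps - 1)
    return centers[idx] ** 2

x = np.linspace(-1.0, 1.0, 10_001)
for n in (10, 100, 1000):
    err = np.max(np.abs(step_approx(x, n) - x**2))
    print(f"{n:5d} steps -> max error {err:.5f}")   # error shrinks as n grows
```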

But again, there is no neural network (or any other approximation structure) with a finite number of parameters that can approximate f(x) = x^2 for all x in (-∞, +∞).

answered Oct 19 '22 by Panagiotis Panagi


The question is very legitimate and unfortunately many of the answers show how little practitioners seem to know about the theory of neural networks. The only rigorous theorem that exists about the ability of neural networks to approximate different kinds of functions is the Universal Approximation Theorem.

The UAT states that any continuous function on a compact domain can be approximated by a neural network with only one hidden layer, provided the activation functions used are BOUNDED, continuous, and monotonically increasing. Now, a finite sum of bounded functions is bounded by definition.

A polynomial is not bounded, so the best we can do is approximate it with a neural network over a compact subset of R^n. Outside of this compact subset, the approximation will fail miserably as the polynomial grows without bound. In other words, the neural network will work well on the training set but will not generalize!
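
To make that concrete, here is a minimal numpy demonstration (the architecture, hyperparameters, and training range are arbitrary illustrative choices, not anything prescribed by the theorem): a one-hidden-layer tanh network fitted to f(x) = x^2 on [-2, 2]. Inside the training range the fit is good; far outside it, the bounded tanh units saturate, so the network's output flattens out while x^2 keeps growing.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: x**2 sampled on the compact set [-2, 2].
X = rng.uniform(-2.0, 2.0, size=(256, 1))
Y = X**2

# One hidden layer of tanh units, linear output.
H = 32
W1 = rng.normal(0.0, 1.0, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.1, (H, 1)); b2 = np.zeros(1)

lr = 0.02
for _ in range(20_000):
    # Forward pass.
    A = np.tanh(X @ W1 + b1)
    P = A @ W2 + b2
    # Backward pass for mean squared error.
    dP = 2.0 * (P - Y) / len(X)
    dW2 = A.T @ dP; db2 = dP.sum(0)
    dZ = (dP @ W2.T) * (1.0 - A**2)
    dW1 = X.T @ dZ; db1 = dZ.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

def net(x):
    return np.tanh(x @ W1 + b1) @ W2 + b2

# Good inside [-2, 2]; increasingly wrong outside, where tanh saturates.
for v in (0.5, 1.5, 5.0, 10.0):
    p = net(np.array([[v]]))[0, 0]
    print(f"x = {v:5.1f}   network = {p:8.2f}   true x^2 = {v*v:8.2f}")
```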

The question is neither off-topic nor merely an expression of the OP's opinion.

answered Oct 18 '22 by Tarek Nassar


I am not sure why there is such a visceral reaction; I think it is a legitimate question whose answer is hard to find by googling, even though I think it is widely appreciated and repeated out loud. I think in this case you are looking for the actual citations showing that a neural net can approximate any function. This recent paper explains it nicely, in my opinion. They also cite the original paper by Barron from 1993, which proved a less general result. The conclusion: a two-layer neural network can represent any bounded-degree polynomial, under certain (seemingly non-restrictive) conditions.

Just in case the link does not work, it is called "Learning Polynomials with Neural Networks" by Andoni et al., 2014.

answered Oct 18 '22 by Martha White