Would I be right in saying that neural networks are good at finding 'good enough' solutions to a problem?
I'm thinking this because they don't produce a binary output for a given input but a probability, for example 0.67 could be an output.
I'm also guessing that because they're often used for generalisation, they're good at finding solutions that solve the problem most of the time but in some rare cases won't.
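For example, I picture something like this (a toy sketch of my own, with a made-up score value):

```python
import math

def sigmoid(z):
    # squash a real-valued score into the interval (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

score = 0.708  # hypothetical pre-activation from the network's last layer
p = sigmoid(score)
print(round(p, 2))  # ~0.67, read as "probably class 1"
print(p >= 0.5)     # thresholding turns the probability into a binary decision
```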
Thank you!
This is a very broad question. In general, a neural network with one hidden layer, a nonlinear activation function and a sufficient number of hidden neurons can approximate any continuous function with arbitrary precision (the universal approximation theorem). However, the error function is not convex, so the result of training depends on the initialization.
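You can see the dependence on initialization with a toy experiment: training the same one-hidden-layer network from different random seeds usually ends at different error values. A minimal NumPy sketch (the architecture, data and hyperparameters are illustrative choices, not anything standard):

```python
import numpy as np

def train_xor(seed, hidden=3, steps=2000, lr=0.5):
    """Train a one-hidden-layer tanh network on XOR; return the final MSE."""
    rng = np.random.default_rng(seed)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
    y = np.array([[0], [1], [1], [0]], float)
    W1 = rng.normal(0, 1, (2, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 1, (hidden, 1)); b2 = np.zeros(1)
    for _ in range(steps):
        h = np.tanh(X @ W1 + b1)                 # hidden activations
        out = 1 / (1 + np.exp(-(h @ W2 + b2)))   # sigmoid output
        err = out - y
        # backpropagate the mean-squared error
        d_out = err * out * (1 - out)
        dW2 = h.T @ d_out; db2 = d_out.sum(0)
        d_h = (d_out @ W2.T) * (1 - h ** 2)
        dW1 = X.T @ d_h; db1 = d_h.sum(0)
        W2 -= lr * dW2; b2 -= lr * db2
        W1 -= lr * dW1; b1 -= lr * db1
    return float(np.mean(err ** 2))

# different initializations generally reach different final errors
losses = [train_xor(s) for s in (0, 1, 2)]
print(losses)
```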
SVMs can approximate any function, too. They are very popular partly because their optimization problem has a unique solution, among other reasons. But recent research has shown that neural networks such as multilayer perceptrons, convolutional neural networks, deep belief networks, multi-column deep neural networks etc. are more efficient and achieve better results for complex applications with huge amounts of data. So it is always a trade-off, as LiKao stated (no free lunch theorem), and no classifier is "perfect".
Here is a paper that describes the advantages of deep networks in comparison to "shallow networks" which includes Support Vector Machines: http://yann.lecun.com/exdb/publis/pdf/bengio-lecun-07.pdf
Here is a standard benchmark and a comparison of different learning algorithms: http://yann.lecun.com/exdb/mnist/
Here is a paper that describes a new kind of neural network that is especially good at solving some vision problems (traffic sign recognition, OCR): http://arxiv.org/abs/1202.2745
There is no easy answer to this question. The advantages/disadvantages of neural networks are a very complex topic. Here are some pointers:
No free lunch theorem: Roughly stated, this theorem proves that there is no "perfect" machine learning method. For every problem on which a certain method is good, there is another problem on which the same method will fail horribly. The problems on which it fails may, however, be solved quite easily by other methods. This should always be considered when doing any machine learning.
Neural networks are quite simple to implement (you do not need a good linear algebra solver, as you do for example for SVMs).
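To illustrate that simplicity: a single logistic neuron trained by gradient descent fits in a few lines of plain Python, with no solver library at all (a sketch under assumed toy data, here the logical AND function):

```python
import math
import random

random.seed(0)
# toy dataset: logical AND
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b = 0.0
lr = 1.0

for _ in range(500):
    for (x1, x2), t in data:
        z = w[0] * x1 + w[1] * x2 + b
        p = 1 / (1 + math.exp(-z))  # sigmoid activation
        g = p - t                   # gradient of the log-loss w.r.t. z
        w[0] -= lr * g * x1
        w[1] -= lr * g * x2
        b    -= lr * g

preds = [round(1 / (1 + math.exp(-(w[0] * x1 + w[1] * x2 + b))))
         for (x1, x2), _ in data]
print(preds)  # the neuron has learned AND: [0, 0, 0, 1]
```

A full multilayer network only adds a loop over layers and the chain rule on top of this; there is no quadratic-programming step as in SVM training.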
The VC dimension of neural networks is unclear. This matters when you want to estimate how well a trained network can generalise.
Neural networks cannot easily be retrained. If you obtain more data later, it is almost impossible to incorporate it into an existing network without retraining.
Neural networks often exhibit patterns similar to those exhibited by humans. However, this is more of interest in cognitive science than for practical applications.
Handling of time series data in neural networks is a very complicated topic.
This is all I can think of for now. Maybe others can add more.