 

How does pre-training improve classification in neural networks?

Many of the papers I have read so far mention that "pre-training a network can improve computational efficiency in terms of back-propagating errors", and that this can be achieved using RBMs or autoencoders.

  1. If I have understood correctly, autoencoders work by learning the identity function, and if the number of hidden units is smaller than the size of the input data, they also perform compression. But what does this have to do with improving the computational efficiency of propagating the error signal backwards? Is it because the weights of the pre-trained hidden units do not diverge much from their initial values?

  2. Assuming the data scientists reading this already know that autoencoders take the inputs as target values, since they learn the identity function, which is regarded as unsupervised learning: can such a method be applied to convolutional neural networks, whose first hidden layer is a set of feature maps? Each feature map is created by convolving a learned kernel with a receptive field in the image. How could this learned kernel be obtained by pre-training (in an unsupervised fashion)? (See the sketch below for one possible setup.)
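
As a rough sketch of what question 2 asks about (the layer sizes, toy data and training loop here are made up for illustration, not taken from any particular paper), unsupervised pre-training of convolutional kernels could look like this in PyTorch: train a small convolutional autoencoder whose target is its own input, then reuse the encoder's kernels as the initial weights of the supervised classifier.

```python
import torch
import torch.nn as nn

# Encoder: the same conv layer that will later produce the classifier's feature maps.
encoder = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=5, padding=2),  # 8 kernels to be learned without labels
    nn.ReLU(),
)
# Decoder: reconstructs the input, so the input itself is the training target.
decoder = nn.Conv2d(8, 1, kernel_size=5, padding=2)

autoencoder = nn.Sequential(encoder, decoder)
opt = torch.optim.SGD(autoencoder.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

x = torch.rand(32, 1, 28, 28)          # stand-in for a batch of unlabeled images
for _ in range(100):                   # unsupervised pre-training phase
    opt.zero_grad()
    loss = loss_fn(autoencoder(x), x)  # reconstruction loss: target = input
    loss.backward()
    opt.step()

# Supervised classifier built on top of the pre-trained conv layer.
classifier = nn.Sequential(
    encoder,                           # kernels start from the pre-trained values
    nn.Flatten(),
    nn.Linear(8 * 28 * 28, 10),        # e.g. 10 classes
)
# classifier is then fine-tuned with backpropagation on labeled data as usual.
```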

VM_AI asked Dec 29 '15 16:12
People also ask

Why it is beneficial to use pre-trained models?

For image recognition tasks, using pre-trained models is a great option. For one, they are easier to use, since they give you the architecture for "free." Additionally, they typically give better results and require less training.

What is pre-training in neural networks?

Pre-training a neural network refers to first training a model on one task or dataset, and then using the parameters or model from that training to train another model on a different task or dataset. This gives the model a head start instead of starting from scratch.

What is the advantage of using a pre-trained model in NLP?

Using pre-trained models allows you to achieve the same or even better performance much faster and with much less labeled data.
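
As a minimal sketch of that idea (assuming a recent torchvision; the 5-class head is an arbitrary example, not something from the answers here), reusing parameters learned on ImageNet as the starting point for a new task looks like this:

```python
import torch
import torchvision.models as models

# Parameters come from pre-training on ImageNet rather than random initialization.
model = models.resnet18(weights="DEFAULT")
# Replace the final layer with a new head for the target task (here: 5 classes).
model.fc = torch.nn.Linear(model.fc.in_features, 5)

# Fine-tune as usual; every other layer starts from the pre-trained weights.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
```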


1 Answer

One thing to note is that autoencoders try to learn a non-trivial version of the identity function, not the identity function itself; otherwise they wouldn't be useful at all. The pre-training helps move the weight vectors towards a good starting point on the error surface. The backpropagation algorithm, which is basically doing gradient descent, is then used to improve upon those weights. Note that plain gradient descent gets stuck in the closest local minimum.

[Image: an error surface with several local minima and a global minimum]

[Ignore the term "global minimum" in the image above and think of it as just another, better, local minimum.]
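
To make the "closest local minimum" point concrete, here is a tiny, made-up one-dimensional example: the same gradient descent loop ends up in a different minimum depending purely on where it starts, which is exactly the starting point that pre-training tries to improve.

```python
# Toy error surface with two minima: a shallow one near w = 1.13
# and a deeper (better) one near w = -1.30.
f      = lambda w: w**4 - 3 * w**2 + w
grad_f = lambda w: 4 * w**3 - 6 * w + 1

def gradient_descent(w0, lr=0.01, steps=500):
    w = w0
    for _ in range(steps):
        w -= lr * grad_f(w)   # plain gradient descent update
    return w

print(gradient_descent(w0=2.0))   # starts on the right -> shallow minimum (~ 1.13)
print(gradient_descent(w0=-2.0))  # starts on the left  -> deeper minimum  (~ -1.30)
```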

Intuitively speaking, suppose you are looking for an optimal path to get from origin A to destination B. Having a map with no routes shown on it (the errors you obtain at the last layer of the neural network) kind of tells you where to go, but you may put yourself on a route with a lot of obstacles and steep climbs and descents. Then suppose someone tells you about a route they have travelled before (the pre-training) and hands you a new map (the pre-training phase's starting point).

This could be an intuitive reason why starting with random weights and immediately optimizing the model with backpropagation may not achieve the performance you get with a pre-trained model. However, note that many models achieving state-of-the-art results do not necessarily use pre-training; they may use backpropagation in combination with other optimization methods (e.g. AdaGrad, RMSProp, momentum, ...) to hopefully avoid getting stuck in a bad local minimum.
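
For concreteness, here is a minimal sketch (the model and data below are placeholders, not taken from the answer) of how those methods are drop-in alternatives around the same backpropagation loop in PyTorch:

```python
import torch

model = torch.nn.Linear(10, 2)  # placeholder model
loss_fn = torch.nn.CrossEntropyLoss()

# Pick one update rule; the gradients from backpropagation are the same either way.
opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)   # SGD with momentum
# opt = torch.optim.Adagrad(model.parameters(), lr=0.01)
# opt = torch.optim.RMSprop(model.parameters(), lr=0.001)

x, y = torch.randn(32, 10), torch.randint(0, 2, (32,))              # toy batch
for _ in range(100):
    opt.zero_grad()
    loss_fn(model(x), y).backward()   # backpropagation computes the gradients
    opt.step()                        # the chosen optimizer decides the weight update
```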

[Second image]

Here's the source for the second image.

Amir answered Jun 05 '23 08:06