 

Neural Networks - Difference between deep autoencoder and stacked autoencoder [closed]

Disclaimer: I also posted this question on CrossValidated but it is not receiving any attention. If this is not the place for it I will gladly remove it.

As I understand it, the only difference between them is the way the two networks are trained. Deep autoencoders are trained in the same way as a single-layer neural network, while stacked autoencoders are trained with a greedy, layer-wise approach. Hugo Larochelle confirms this in the comments of this video. Is this the ONLY difference? Any pointers?

RiccB Avatar asked Mar 15 '18 10:03

RiccB




1 Answer

The terminology in the field isn't fixed or clearly defined, and different researchers can mean different things by, or attach different aspects to, the same terms. Example discussions:

  • What is the difference between Deep Learning and traditional Artificial Neural Network machine learning? (some people think that two layers are already deep, others mean 10+ or 100+ layers).

  • Multi-layer perceptron vs deep neural network (mostly synonyms, but there are researchers who prefer one term over the other).

As for AE, according to various sources, deep autoencoder and stacked autoencoder are exact synonyms, e.g., here's a quote from "Hands-On Machine Learning with Scikit-Learn and TensorFlow":

Just like other neural networks we have discussed, autoencoders can have multiple hidden layers. In this case they are called stacked autoencoders (or deep autoencoders).

Later on, the author discusses two methods of training an autoencoder and uses both terms interchangeably.

I would agree that the term "stacked" suggests an autoencoder that can be extended with new layers without retraining, but this is actually true regardless of how the existing layers were trained (jointly or separately). Likewise, regardless of the training method, researchers may or may not call the result deep. So I wouldn't focus too much on the terminology. It may stabilize some day, but not right now.
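To make the training-procedure distinction concrete, here is a minimal NumPy sketch. It is a hypothetical illustration (linear layers, toy data, made-up dimensions), not code from the book or the video: greedy layer-wise ("stacked") pretraining trains each autoencoder on the codes produced by the previous one, whereas joint ("deep") training would backpropagate through the whole 8 -> 4 -> 2 -> 4 -> 8 stack in a single loop.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_linear_ae(X, code_dim, lr=0.1, epochs=500):
    """Train one linear autoencoder (encoder W, decoder V) by gradient
    descent on the squared reconstruction error.  Returns the encoder
    weights and the loss before and after training."""
    n, d = X.shape
    W = rng.normal(0.0, 0.1, (d, code_dim))   # encoder weights
    V = rng.normal(0.0, 0.1, (code_dim, d))   # decoder weights

    def loss():
        return float(np.mean((X @ W @ V - X) ** 2))

    before = loss()
    for _ in range(epochs):
        H = X @ W                  # codes
        E = H @ V - X              # reconstruction error
        gV = H.T @ E / n           # dL/dV (up to a constant factor)
        gW = X.T @ (E @ V.T) / n   # dL/dW
        V -= lr * gV
        W -= lr * gW
    return W, before, loss()

# Greedy, layer-wise pretraining: train the first autoencoder on the
# raw data, then train the second autoencoder on the first one's codes.
X = rng.normal(size=(200, 8))
W1, before1, after1 = train_linear_ae(X, 4)
H1 = X @ W1                        # codes from layer 1
W2, before2, after2 = train_linear_ae(H1, 2)

# Joint ("deep") training would instead update W1, W2 and both decoders
# simultaneously, backpropagating the reconstruction error of X through
# the entire stack rather than training each autoencoder in isolation.
```

After pretraining, the stacked encoders `W1` and `W2` are typically composed and fine-tuned jointly anyway, which is one more reason the two terms blur together in practice.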

Maxim Avatar answered Oct 30 '22 09:10

Maxim