Recently I read a paper that cited these two problems when training GANs. I know about mode collapse, where the generator produces only a limited variety of samples, but I could not find a good explanation of mode dropping.
Does anyone have a good answer?
The paper is the following: An empirical study on evaluation metrics of generative adversarial networks
Mode dropping happens when the generator's output distribution misses some modes of the real data distribution entirely. The individual samples can still be sharp and varied, but whole regions of the data never get generated. For example, a GAN trained on MNIST might produce convincing images of most digits while never producing a single "7".

This is related to, but distinct from, mode collapse: in mode collapse many different latent codes are mapped to nearly the same output, so sample diversity is visibly low; in mode dropping the generated samples may look diverse, yet certain modes of the target distribution are silently absent. Because the discriminator only judges whether individual samples look real, it gives the generator little pressure to cover every mode, which is why mode dropping can go unnoticed by sample-quality metrics alone.
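A small toy sketch can make this concrete. The setup below is entirely hypothetical (not from the cited paper): the "real" data has three 1-D Gaussian modes, and a stand-in "generator" only ever samples around two of them, so one mode is dropped even though its samples are locally diverse.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: three well-separated 1-D modes.
real_modes = np.array([-4.0, 0.0, 4.0])

def generator(n):
    # A mode-dropping "generator": it samples around only the first two
    # modes, so the mode at +4.0 never appears in its output.
    centers = rng.choice(real_modes[:2], size=n)
    return centers + rng.normal(scale=0.3, size=n)

samples = generator(10_000)

# Assign each sample to its nearest real mode and measure coverage.
assignments = np.abs(samples[:, None] - real_modes[None, :]).argmin(axis=1)
coverage = np.bincount(assignments, minlength=len(real_modes)) / len(samples)
print(coverage)  # third entry is ~0: that mode was dropped
```

A per-sample quality check would pass every one of these points, yet the coverage histogram immediately exposes the missing mode, which is the usual way mode dropping is diagnosed in practice.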