I've been very interested in GANs lately.
I coded one for MNIST with the following structure:

- Generator model
- Discriminator model
- Gen + Dis model
The generator produces batches of images from a random distribution. The discriminator is trained on those and on real images. Then the discriminator is frozen inside the Gen + Dis model and the generator is trained (with the frozen discriminator judging whether the generator's output looks real or not).
Now, imagine I don't want to feed my generator with a random distribution but with images (for upscaling, for example, or to generate a real image from a drawing).

Do I need to change anything (apart from the conv model, which will be more complex)? Should I keep using binary_crossentropy as the loss function?
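For reference, the kind of image-conditioned generator I have in mind would look roughly like this (the 14x14 → 28x28 upscaling shapes are made up for illustration):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical upscaling generator: instead of a noise vector,
# the input is a low-resolution 14x14 image, and the output is
# a 28x28 image of the same content.
inp = keras.Input(shape=(14, 14, 1))
x = layers.UpSampling2D()(inp)                                  # 14x14 -> 28x28
x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)  # refine details
out = layers.Conv2D(1, 3, padding="same", activation="sigmoid")(x)
sr_generator = keras.Model(inp, out)

low_res = np.random.rand(4, 14, 14, 1).astype("float32")
print(sr_generator.predict(low_res).shape)  # (4, 28, 28, 1)
```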
Thank you very much!
You can indeed put a variational autoencoder (VAE) in front to generate the initial distribution z (see paper).
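A minimal sketch of such a front-end, assuming `tf.keras`: the VAE encoder maps an input image to a mean and log-variance, then samples z with the reparameterization trick; that z is what you would feed to your generator. All layer sizes here are illustrative.

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

latent_dim = 100

class Sampling(layers.Layer):
    """Reparameterization trick: z = mu + sigma * epsilon."""
    def call(self, inputs):
        mu, log_var = inputs
        eps = tf.random.normal(tf.shape(mu))
        return mu + tf.exp(0.5 * log_var) * eps

# Encoder: image -> (mu, log_var) -> sampled latent vector z.
inp = keras.Input(shape=(28 * 28,))
h = layers.Dense(256, activation="relu")(inp)
mu = layers.Dense(latent_dim)(h)
log_var = layers.Dense(latent_dim)(h)
z = Sampling()([mu, log_var])
encoder = keras.Model(inp, z)

images = np.random.rand(8, 28 * 28).astype("float32")
print(encoder.predict(images).shape)  # (8, 100)
```

The encoder's z then plays the role of the random distribution your generator currently consumes, so the rest of your GAN training loop stays the same.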
If you are interested in the topic, I can recommend this course at Kadenze.