Add class information to Generator model in keras

I want to use conditional GANs to generate images for one domain (noted as domain A), given input images from a second domain (noted as domain B) together with the class information. Both domains share the same label information (every image of domain A is linked to an image of domain B and to a specific label). My generator so far in Keras is the following:

# Keras 1.x functional API (BatchNormalization(mode=...), Model(input=..., output=...))
from keras.models import Model
from keras.layers import Input, Dense, Flatten, Reshape, Activation
from keras.layers.normalization import BatchNormalization

def generator_model_v2():
    global BATCH_SIZE
    inputs = Input((IN_CH, img_cols, img_rows))
    e1 = BatchNormalization(mode=0)(inputs)
    e2 = Flatten()(e1)
    e3 = BatchNormalization(mode=0)(e2)
    e4 = Dense(1024, activation="relu")(e3)
    e5 = BatchNormalization(mode=0)(e4)
    e6 = Dense(512, activation="relu")(e5)
    e7 = BatchNormalization(mode=0)(e6)
    e8 = Dense(512, activation="relu")(e7)
    e9 = BatchNormalization(mode=0)(e8)
    e10 = Dense(IN_CH * img_cols * img_rows, activation="relu")(e9)
    e11 = Reshape((3, 28, 28))(e10)
    e12 = BatchNormalization(mode=0)(e11)
    e13 = Activation('tanh')(e12)

    model = Model(input=inputs, output=e13)
    return model

So far my generator takes as input the images from domain A (with the goal of outputting images from domain B). I would also like to feed in the class information of the input from domain A so that the generator produces images of the same class for domain B. How can I add the label information after the flattening? So instead of an input of size 1x1024 I would have 1x1025, for example. Can I use a second Input for the class information in the generator? And if so, how would I then call the generator from the GAN training procedure?

The training procedure:

discriminator_and_classifier_on_generator = generator_containing_discriminator_and_classifier(
    generator, discriminator, classifier)
generator.compile(loss=generator_l1_loss, optimizer=g_optim)
discriminator_and_classifier_on_generator.compile(
    loss=[generator_l1_loss, discriminator_on_generator_loss, "categorical_crossentropy"],
    optimizer="rmsprop")
discriminator.compile(loss=discriminator_loss, optimizer=d_optim) # rmsprop
classifier.compile(loss="categorical_crossentropy", optimizer=c_optim)

for epoch in range(30):
    for index in range(int(X_train.shape[0] / BATCH_SIZE)):
        image_batch = Y_train[index * BATCH_SIZE:(index + 1) * BATCH_SIZE]
        label_batch = LABEL_train[index * BATCH_SIZE:(index + 1) * BATCH_SIZE]  # replace with your data here
        generated_images = generator.predict(X_train[index * BATCH_SIZE:(index + 1) * BATCH_SIZE])
        # real pair: (input image, ground-truth target); fake pair: (input image, generated image)
        real_pairs = np.concatenate((X_train[index * BATCH_SIZE:(index + 1) * BATCH_SIZE, :, :, :], image_batch), axis=1)
        fake_pairs = np.concatenate((X_train[index * BATCH_SIZE:(index + 1) * BATCH_SIZE, :, :, :], generated_images), axis=1)
        X = np.concatenate((real_pairs, fake_pairs))
        # discriminator targets: 1 for real pairs, 0 for fake pairs
        y = np.concatenate((np.ones((100, 1, 64, 64)), np.zeros((100, 1, 64, 64))))
        d_loss = discriminator.train_on_batch(X, y)
        discriminator.trainable = False
        c_loss = classifier.train_on_batch(image_batch, label_batch)
        classifier.trainable = False
        g_loss = discriminator_and_classifier_on_generator.train_on_batch(
            X_train[index * BATCH_SIZE:(index + 1) * BATCH_SIZE, :, :, :], 
            [image_batch, np.ones((100, 1, 64, 64)), label_batch])
        discriminator.trainable = True
        classifier.trainable = True

The code is an implementation of conditional DCGANs (with the addition of a classifier on top of the discriminator). The network functions are:

def generator_containing_discriminator_and_classifier(generator, discriminator, classifier):
    inputs = Input((IN_CH, img_cols, img_rows))
    x_generator = generator(inputs)
    # the discriminator sees the (input, generated) pair; the classifier only the generated image
    merged = merge([inputs, x_generator], mode='concat', concat_axis=1)
    discriminator.trainable = False
    x_discriminator = discriminator(merged)
    classifier.trainable = False
    x_classifier = classifier(x_generator)
    model = Model(input=inputs, output=[x_generator, x_discriminator, x_classifier])
    return model

def generator_containing_discriminator(generator, discriminator):
    inputs = Input((IN_CH, img_cols, img_rows))
    x_generator = generator(inputs)
    merged = merge([inputs, x_generator], mode='concat', concat_axis=1)
    discriminator.trainable = False
    x_discriminator = discriminator(merged)
    model = Model(input=inputs, output=[x_generator, x_discriminator])
    return model
asked Aug 23 '18 by Jose Ramon

1 Answer

First, following the approach described in Conditional Generative Adversarial Nets, you have to define a second input. Then simply concatenate the two input vectors and process this concatenated vector:

def generator_model_v2():
    input_image = Input((IN_CH, img_cols, img_rows))
    input_conditional = Input((n_classes,))  # note the trailing comma: the shape must be a tuple
    e0 = Flatten()(input_image)
    # Concatenate is the Keras 2 layer; with the Keras 1 API used above this would be
    # merge([e0, input_conditional], mode='concat', concat_axis=1)
    e1 = Concatenate()([e0, input_conditional])
    e2 = BatchNormalization(mode=0)(e1)
    e3 = BatchNormalization(mode=0)(e2)
    e4 = Dense(1024, activation="relu")(e3)
    e5 = BatchNormalization(mode=0)(e4)
    e6 = Dense(512, activation="relu")(e5)
    e7 = BatchNormalization(mode=0)(e6)
    e8 = Dense(512, activation="relu")(e7)
    e9 = BatchNormalization(mode=0)(e8)
    e10 = Dense(IN_CH * img_cols * img_rows, activation="relu")(e9)
    e11 = Reshape((3, 28, 28))(e10)
    e12 = BatchNormalization(mode=0)(e11)
    e13 = Activation('tanh')(e12)

    model = Model(input=[input_image, input_conditional], output=e13)
    return model
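
The conditional input expects an n_classes-dimensional vector per sample, so integer class ids would have to be one-hot encoded before being fed in. A minimal sketch (not part of the original answer), reusing LABEL_train from the question and assuming it holds integer class ids:

from keras.utils import np_utils

# one-hot encode the integer class ids so they match the Input((n_classes,)) conditional input
LABEL_train_onehot = np_utils.to_categorical(LABEL_train, n_classes)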

Then you also need to pass the class labels to the network during training:

classifier.train_on_batch([image_batch, class_batch], label_batch)
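
Since the generator now has two inputs, every call that involves it in the question's training loop has to pass both tensors. The following is a sketch, not part of the original answer: it reuses the variable names from the question and assumes label_batch already holds the one-hot class vectors.

# the generator is called with a list of its two inputs
generated_images = generator.predict(
    [X_train[index * BATCH_SIZE:(index + 1) * BATCH_SIZE], label_batch])

# the combined model needs the extra input as well, both in its definition ...
def generator_containing_discriminator_and_classifier(generator, discriminator, classifier):
    inputs = Input((IN_CH, img_cols, img_rows))
    labels = Input((n_classes,))
    x_generator = generator([inputs, labels])
    merged = merge([inputs, x_generator], mode='concat', concat_axis=1)
    discriminator.trainable = False
    x_discriminator = discriminator(merged)
    classifier.trainable = False
    x_classifier = classifier(x_generator)
    return Model(input=[inputs, labels], output=[x_generator, x_discriminator, x_classifier])

# ... and when it is trained on a batch
g_loss = discriminator_and_classifier_on_generator.train_on_batch(
    [X_train[index * BATCH_SIZE:(index + 1) * BATCH_SIZE, :, :, :], label_batch],
    [image_batch, np.ones((100, 1, 64, 64)), label_batch])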
answered Nov 15 '22 by zimmerrol