As mentioned, I'm trying to normalize my dataset before training my model. I was previously using tf.keras.preprocessing.image.ImageDataGenerator for this:
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Cast to float so the featurewise statistics can be computed
train_data = tf.cast(train_data, tf.float32)

# Compute dataset-wide mean/std, then standardize each batch on the fly
train_gen = ImageDataGenerator(
    featurewise_center=True,
    featurewise_std_normalization=True
)
train_gen.fit(train_data)

train_generator = train_gen.flow(train_data, train_labels,
                                 batch_size=batch_size,
                                 shuffle=True)

model.fit(train_generator, epochs=base_epochs)
However, I had to give it up because I implemented a complicated loss function using a custom layer, which requires the data and labels to be passed to the model as separate inputs. Is there any other function in TensorFlow Keras that I can use to normalize my samples?
import numpy as np

def standardize(image_data):
    # Subtract the per-feature mean and divide by the per-feature std
    # (computed across the sample axis); assumes float input
    image_data -= np.mean(image_data, axis=0)
    image_data /= np.std(image_data, axis=0)
    return image_data
It's an easy way to solve the problem: just preprocess the data myself before handing it to the model.
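Below is a minimal sketch of how the standardized arrays could be fed to model.fit directly, reusing the train_data, train_labels, batch_size, base_epochs, and model names from the question; the tf.keras.layers.Normalization layer (available in recent TF 2.x releases) is shown as one built-in alternative, not necessarily an exact replacement for the ImageDataGenerator statistics.

import numpy as np
import tensorflow as tf

# Standardize the whole training set up front, then pass data and labels
# to model.fit as plain arrays (no generator needed)
train_data_std = standardize(np.asarray(train_data, dtype=np.float32))
model.fit(x=train_data_std,
          y=train_labels,
          batch_size=batch_size,
          shuffle=True,
          epochs=base_epochs)

# Built-in alternative: a Normalization layer adapted to the raw training data.
# With axis=None it learns a single mean/variance over all values.
norm = tf.keras.layers.Normalization(axis=None)
norm.adapt(train_data)
train_data_norm = norm(train_data)

The Normalization layer can also be placed inside the model itself, so the same statistics learned during adapt are applied automatically at inference time.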