Keras fit_generator is very slow. The GPU is not used constantly during training; its usage sometimes drops to 0%, even with 4 workers and use_multiprocessing=True.
The script's processes also request too much virtual memory and sit in D status, i.e. uninterruptible sleep (usually I/O).
I have already tried different combinations of max_queue_size, but it didn't help.
Screenshot of GPU usage
Screenshot of process virtual memory and status
Hardware info: GPU = Titan Xp, 12 GB
Data generator class code:
import numpy as np
import keras
import conf
class DataGenerator(keras.utils.Sequence):
    'Generates data for Keras'
    def __init__(self, list_IDs, labels, batch_size=32, dim=(conf.max_file, 128),
                 n_classes=10, shuffle=True):
        'Initialization'
        self.dim = dim
        self.batch_size = batch_size
        self.labels = labels
        self.list_IDs = list_IDs
        self.n_classes = n_classes
        self.shuffle = shuffle
        self.on_epoch_end()
    def __len__(self):
        'Denotes the number of batches per epoch'
        return int(np.floor(len(self.list_IDs) / self.batch_size))
    def __getitem__(self, index):
        'Generate one batch of data'
        # Generate indexes of the batch
        indexes = self.indexes[index*self.batch_size:(index+1)*self.batch_size]
        # Find list of IDs
        list_IDs_temp = [self.list_IDs[k] for k in indexes]
        # Generate data
        X, y = self.__data_generation(list_IDs_temp)
        return X, y
    def on_epoch_end(self):
        'Updates indexes after each epoch'
        self.indexes = np.arange(len(self.list_IDs))
        if self.shuffle:
            np.random.shuffle(self.indexes)
    def __data_generation(self, list_IDs_temp):
        'Generates data containing batch_size samples' # X : (n_samples, *dim, n_channels)
        # Initialization
        X = np.empty((self.batch_size, *self.dim))
        y = np.empty((self.batch_size, conf.max_file, self.n_classes))
        # Generate data
        for i, ID in enumerate(list_IDs_temp):
            # Store sample
            X[i, ] = np.load(conf.dir_out_data + "data_by_file/" + ID)
            # Store class
            y[i, ] = np.load(conf.dir_out_data + "data_by_file/" + self.labels[ID])
        return X, y
Training script code:
training_generator = DataGenerator(partition['train'], labels, **params)
validation_generator = DataGenerator(partition['validation'], labels, **params)
model.fit_generator(generator=training_generator,
                    validation_data=validation_generator,
                    epochs=steps,
                    callbacks=[tensorboard, checkpoint],
                    workers=4,
                    use_multiprocessing=True,
                    max_queue_size=50)
Answer: If you are using Tensorflow 2.0, you might be hitting this bug: https://github.com/tensorflow/tensorflow/issues/33024
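To confirm which Tensorflow version is actually installed, a quick check is:
import tensorflow as tf
print(tf.__version__)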
Workarounds are:
- Call tf.compat.v1.disable_eager_execution() at the start of the code.
- Use model.fit rather than model.fit_generator. The former supports generators anyway.
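For illustration, a minimal sketch of how those workarounds would look with the objects from the question (model, training_generator, validation_generator, tensorboard, checkpoint, steps); either change on its own may be enough:
import tensorflow as tf

# Workaround 1: fall back to graph mode; must run before the model is built
tf.compat.v1.disable_eager_execution()

# Workaround 2: model.fit accepts keras.utils.Sequence objects directly
# and takes the same queuing arguments as fit_generator
model.fit(training_generator,
          validation_data=validation_generator,
          epochs=steps,
          callbacks=[tensorboard, checkpoint],
          workers=4,
          use_multiprocessing=True,
          max_queue_size=50)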
Regardless of the Tensorflow version, there does seem to be an issue with generators being slow in 1.13.2 and 2.0.1 (at least): https://github.com/keras-team/keras/issues/12683
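Independently of Keras, it is worth checking whether the generator itself is the slow part, since __data_generation does two np.load disk reads per sample. A hypothetical timing check (names taken from the question) could look like:
import time

gen = DataGenerator(partition['train'], labels, **params)
start = time.time()
n_batches = 10
for i in range(n_batches):
    X, y = gen[i]          # pulls one batch exactly as fit_generator would
print("seconds per batch:", (time.time() - start) / n_batches)
If preparing a batch on the CPU takes longer than the GPU needs for one training step, the GPU will keep stalling at 0% no matter how many workers or how large a max_queue_size you use.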