 

Keras: network doesn't train with fit_generator()

I'm using Keras on a large dataset (music auto-tagging with the MagnaTagATune dataset), so I've tried to use the fit_generator() function with a custom data generator. But the values of the loss function and the metrics don't change during training. It looks like my network doesn't train at all.

When I use the fit() function instead of fit_generator(), everything is OK, but I can't keep the whole dataset in memory.

I've tried both the Theano and TensorFlow backends.

Main code:

if __name__ == '__main__':
    model = models.FCN4()
    model.compile(optimizer='adam',
                  loss='binary_crossentropy',
                  metrics=['accuracy', 'categorical_accuracy', 'precision', 'recall'])
    gen = mttutils.generator_v2(csv_path, melgrams_dir)
    history = model.fit_generator(gen.generate(0,750),
                                  samples_per_epoch=750,
                                  nb_epoch=80,
                                  validation_data=gen.generate(750, 1000),
                                  nb_val_samples=250)
    # RESULTS SAVING
    np.save(output_history, history.history)
    model.save(output_model)

import csv
import os
import numpy as np

class generator_v2:

    # The 50 MagnaTagATune tags used as output labels.
    genres = ['guitar', 'classical', 'slow', 'techno', 'strings', 'drums', 'electronic', 'rock', 'fast',
              'piano', 'ambient', 'beat', 'violin', 'vocal', 'synth', 'female', 'indian', 'opera', 'male', 'singing',
              'vocals', 'no vocals', 'harpsichord', 'loud', 'quiet', 'flute', 'woman', 'male vocal', 'no vocal',
              'pop', 'soft', 'sitar', 'solo', 'man', 'classic', 'choir', 'voice', 'new age', 'dance', 'male voice',
              'female vocal', 'beats', 'harp', 'cello', 'no voice', 'weird', 'country', 'metal', 'female voice', 'choral']

    def __init__(self, csv_path, melgrams_dir):

        def get_dict_vals(dictionary, keys):
            vals = []
            for key in keys:
                vals.append(dictionary[key])
            return vals

        # Read the tab-separated annotation file and keep only the clips
        # that have at least one of the tags above set.
        self.melgrams_dir = melgrams_dir
        with open(csv_path, newline='') as csvfile:
            reader = csv.DictReader(csvfile, dialect='excel-tab')
            self.labels = []
            for row in reader:
                labels_arr = np.array(get_dict_vals(
                    row, self.genres)).astype(np.int)
                labels_arr = labels_arr.reshape((1, labels_arr.shape[0]))
                if (np.sum(labels_arr) > 0):
                    self.labels.append((row['mp3_path'], labels_arr))
            self.size = len(self.labels)

    def generate(self, begin, end):
        # Loop endlessly over the samples in [begin, end), yielding one
        # (mel-spectrogram, tags) pair at a time and skipping missing files.
        while(1):
            for count in range(begin, end):
                try:
                    item = self.labels[count]
                    mels = np.load(os.path.join(
                        self.melgrams_dir, item[0] + '.npy'))
                    tags = item[1]
                    yield((mels, tags))
                except FileNotFoundError:
                    continue

To prepare arrays for the fit() function I use this code:

def TEST_get_data_array(csv_path, melgrams_dir):
    gen = generator_v2(csv_path, melgrams_dir).generate(0,100)
    item = next(gen)
    x = np.array(item[0])
    y = np.array(item[1])
    for i in range(0,100):
        item = next(gen)
        x = np.concatenate((x,item[0]),axis = 0)
        y = np.concatenate((y,item[1]),axis = 0)
    return(x,y)

Sorry if the style of my code is not good, and thank you!

UPD 1: I've tried to use return(X, y) instead of yield(X, y), but nothing changed.

Part of my new generator class:

def generate(self):  
    if((self.count < self.begin) or (self.count >= self.end)):
        self.count = self.begin
    item = self.labels[self.count]
    mels = np.load(os.path.join(self.melgrams_dir, item[0] + '.npy'))
    tags = item[1]
    self.count = self.count + 1
    return((mels, tags))

def __next__(self):   # fit_generator() uses this method
    return self.generate() 

fit_generator call:

history = model.fit_generator(tr_gen,
                              samples_per_epoch = tr_gen.size,
                              nb_epoch = 120,
                              validation_data = val_gen,
                              nb_val_samples = val_gen.size)

Logs:

Epoch 1/120
10554/10554 [==============================] - 545s - loss: 1.7240 - acc: 0.8922 
Epoch 2/120
10554/10554 [==============================] - 526s - loss: 1.8922 - acc: 0.8820 
Epoch 3/120
10554/10554 [==============================] - 526s - loss: 1.8922 - acc: 0.8820 
Epoch 4/120
10554/10554 [==============================] - 526s - loss: 1.8922 - acc: 0.8820 
... etc (loss is always 1.8922; acc is always 0.8820)
Ladislao asked Oct 29 '22 12:10


1 Answer

I had the same problem as you with the yield method, so I stored the current index and returned one batch per call with a return statement.

In other words, I used return (X, y) instead of yield (X, y) and it worked. I am not sure why this is; it would be cool if someone could shed some light on it.
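
A minimal sketch of what I mean (the BatchGenerator name and the samples list are placeholders I made up, not code from the question, so adapt them to your data):

    class BatchGenerator:
        # Keeps a list of (X, y) pairs and a running index; every call to
        # __next__ returns one batch and advances the index, wrapping
        # around when the end of the list is reached.
        def __init__(self, samples):
            self.samples = samples        # list of (X, y) numpy array pairs
            self.size = len(samples)
            self.count = 0

        def __next__(self):
            if self.count >= self.size:
                self.count = 0
            X, y = self.samples[self.count]
            self.count += 1
            return (X, y)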

Edit: You need to pass the generator object itself to the function, not the result of calling it. Something like this:

model.fit_generator(gen, samples_per_epoch=750,
                    nb_epoch=80,
                    validation_data=gen,
                    nb_val_samples=250)

Keras will call your __next__ method while training on the data.
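
As a quick sanity check (just a sketch; tr_gen and val_gen are the generator objects from the question), you can call next() on the object yourself and inspect the returned batch before handing the object over:

    # __next__ should return a single (X, y) batch.
    X_batch, y_batch = next(tr_gen)
    print(X_batch.shape, y_batch.shape)

    # Pass the objects themselves, not the result of calling a method on them:
    model.fit_generator(tr_gen,
                        samples_per_epoch=tr_gen.size,
                        nb_epoch=120,
                        validation_data=val_gen,
                        nb_val_samples=val_gen.size)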

Thomas Pinetz answered Nov 15 '22 06:11