 

Keras validation very slow when using model.fit_generator

When I use my dataset to fine-tune ResNet-50 in Keras (TensorFlow backend), I find it very odd that after each epoch, validation is slower than training. I don't know why; is it because my GPU does not have enough memory? My GPU is a K2200, which has 4 GB of memory. Am I misunderstanding the parameters' meaning?

I have 35946 training pictures, so I use:

samples_per_epoch=35946,

I have 8986 validation pictures, so I use:

 nb_val_samples=8986,

The following is part of my code:

train_datagen = ImageDataGenerator(
    rescale=1./255,
    featurewise_center=False,  # set input mean to 0 over the dataset
    samplewise_center=False,  # set each sample mean to 0
    featurewise_std_normalization=False,  # divide inputs by std of the dataset
    samplewise_std_normalization=False,  # divide each input by its std
    zca_whitening=False,  # apply ZCA whitening
    rotation_range=20,  # randomly rotate images in the range (degrees, 0 to 180)
    width_shift_range=0.1,  # randomly shift images horizontally (fraction of total width)
    height_shift_range=0.1,  # randomly shift images vertically (fraction of total height)
    horizontal_flip=True,  # randomly flip images
    vertical_flip=False,
    zoom_range=0.1,
    channel_shift_range=0.,
    fill_mode='nearest',
    cval=0.,
)
test_datagen = ImageDataGenerator(rescale=1. / 255)

train_generator = train_datagen.flow_from_directory(
    'data/train',
    batch_size=batch_size,
    class_mode='categorical')

validation_generator = test_datagen.flow_from_directory(
    'data/val',
    batch_size=batch_size,
    class_mode='categorical')
model.fit_generator(train_generator,
                    # steps_per_epoch=X_train.shape[0] // batch_size,
                    samples_per_epoch=35946,
                    epochs=epochs,
                    validation_data=validation_generator,
                    verbose=1,
                    nb_val_samples=8986,
                    callbacks=[earlyStopping,saveBestModel,tensorboard])
asked Apr 01 '17 by Yanning Zhou

People also ask

Is model.fit_generator deprecated?

Update July 2021: for TensorFlow 2.2+ users, just use the .fit method for your projects. The .fit_generator method is deprecated, because .fit can now handle generators directly.

What is the difference between fit_generator and fit?

.fit is used when the entire training dataset fits into memory and no data augmentation is applied. .fit_generator is used when the dataset is too large to fit into memory, or when data augmentation needs to be applied.
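
For illustration, here is a minimal sketch of the modern equivalent, assuming TensorFlow 2.2+ (where model.fit accepts generators directly); the target size, batch size, and the ResNet-50 head below are assumptions, not taken from the question:

from tensorflow.keras.applications import ResNet50
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Model
from tensorflow.keras.preprocessing.image import ImageDataGenerator

batch_size = 32  # assumed; the question does not state its batch size

train_gen = ImageDataGenerator(rescale=1. / 255, horizontal_flip=True).flow_from_directory(
    'data/train', target_size=(224, 224), batch_size=batch_size, class_mode='categorical')
val_gen = ImageDataGenerator(rescale=1. / 255).flow_from_directory(
    'data/val', target_size=(224, 224), batch_size=batch_size, class_mode='categorical')

# A simple ResNet-50 fine-tuning head, standing in for the question's model.
base = ResNet50(weights='imagenet', include_top=False, pooling='avg')
model = Model(base.input, Dense(train_gen.num_classes, activation='softmax')(base.output))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

# .fit takes the generators directly; counts are in batches (steps), not samples.
model.fit(train_gen,
          steps_per_epoch=35946 // batch_size,
          validation_data=val_gen,
          validation_steps=8986 // batch_size,
          epochs=10)

Note that in Keras 2 the counts are expressed in batches (steps_per_epoch, validation_steps) rather than in samples (samples_per_epoch, nb_val_samples).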


1 Answer

@Yanning As you mentioned in your comment, the first epoch is slow because the ImageDataGenerator is reading data from disk into RAM. That part is very slow. Once the data has been read into RAM, it is just a matter of transferring it from RAM to the GPU.

Therefore, if your dataset is not huge and can fit into RAM, you can try making a single NumPy file out of the whole dataset and reading that data at the beginning. This will save a lot of disk-seek time.

Please check out this post for a comparison of the time taken by different operations:

Latency Numbers Every Programmer Should Know

Latency Comparison Numbers

Main memory reference                         100 ns
Read 1 MB sequentially from memory        250,000 ns 
Read 1 MB sequentially from SSD         1,000,000 ns
Read 1 MB sequentially from disk       20,000,000 ns
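
To make the suggestion above concrete, here is a minimal sketch (not the answerer's code; the paths, target size, and helper function below are assumptions) that decodes the validation images once, caches them as NumPy arrays, and passes the arrays to validation_data, so only the training generator touches the disk each epoch:

import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

loader = ImageDataGenerator(rescale=1. / 255)

def to_arrays(directory, num_samples, batch_size=64, target_size=(224, 224)):
    # Read every image under `directory` exactly once and stack into arrays.
    gen = loader.flow_from_directory(directory, target_size=target_size,
                                     batch_size=batch_size,
                                     class_mode='categorical', shuffle=False)
    xs, ys = [], []
    for _ in range(int(np.ceil(num_samples / batch_size))):
        x, y = next(gen)
        xs.append(x)
        ys.append(y)
    return np.concatenate(xs), np.concatenate(ys)

# ~8986 images at 224x224x3 float32 is roughly 5 GB, so make sure it fits in RAM.
X_val, y_val = to_arrays('data/val', 8986)
np.save('val_x.npy', X_val)  # optional: persist so later runs skip JPEG decoding
np.save('val_y.npy', y_val)

# fit_generator also accepts a plain (x, y) tuple for validation_data, so only
# the training generator still reads from disk during each epoch:
# model.fit_generator(train_generator, samples_per_epoch=35946, epochs=epochs,
#                     validation_data=(X_val, y_val), verbose=1)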
answered Oct 07 '22 by Pranjal Sahu