 

Record the computation time for each epoch in Keras during model.fit()

I want to compare the computation time of different models. During model.fit() the computation time per epoch is printed to the console:

Epoch 5/5
160000/160000 [==============================] - 10s ......

I'm looking for a way to store these times in a similar way to the model metrics that are saved for each epoch and available through the history object.

asked Apr 03 '17 by itamar kanter


People also ask

What is epoch in model fit?

The number of epochs is how many times you go through your training set. The model is updated each time a batch is processed, which means it can be updated multiple times during one epoch. If batch_size is set equal to the length of x, then the model will be updated exactly once per epoch.
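As a quick worked illustration (the sample count and batch size below are made up for the example, not taken from the question), the number of weight updates per epoch is simply the number of batches:

import math

n_samples = 160000   # size of the training set
batch_size = 32

# One update per batch, so updates per epoch = number of batches
updates_per_epoch = math.ceil(n_samples / batch_size)
print(updates_per_epoch)  # 5000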

What is ETA in epoch?

ETA = Estimated Time of Arrival. In the Keras progress bar it is the estimated time remaining until the current epoch finishes.


2 Answers

Try the following callback:

import time
import keras

class TimeHistory(keras.callbacks.Callback):
    def on_train_begin(self, logs={}):
        # One entry per epoch will be appended here
        self.times = []

    def on_epoch_begin(self, batch, logs={}):
        self.epoch_time_start = time.time()

    def on_epoch_end(self, batch, logs={}):
        self.times.append(time.time() - self.epoch_time_start)

Then:

time_callback = TimeHistory()
model.fit(..., callbacks=[..., time_callback], ...)
times = time_callback.times

In this case times should store the computation time of each epoch, in seconds.
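As an illustration, here is a minimal sketch of how the recorded times can be read back alongside the usual history metrics (the model, x_train and y_train names are placeholders, not part of the answer above):

time_callback = TimeHistory()
history = model.fit(x_train, y_train,
                    epochs=5,
                    batch_size=32,
                    callbacks=[time_callback])

# time_callback.times has one entry per epoch, just like history.history['loss']
for epoch, (seconds, loss) in enumerate(zip(time_callback.times,
                                            history.history['loss']), start=1):
    print(f"Epoch {epoch}: {seconds:.2f}s, loss={loss:.4f}")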

answered Sep 28 '22 by Marcin Możejko


Referring to Marcin Możejko's answer:

import time
import keras

class TimeHistory(keras.callbacks.Callback):
    def on_train_begin(self, logs={}):
        self.times = []

    def on_epoch_begin(self, epoch, logs={}):
        self.epoch_time_start = time.time()

    def on_epoch_end(self, epoch, logs={}):
        self.times.append(time.time() - self.epoch_time_start)

Then:

time_callback = TimeHistory()
model.fit(..., callbacks=[..., time_callback], ...)

Execution log:

Train on 17000 samples, validate on 8000 samples
Epoch 1/3
17000/17000 [==============================] - 5s 266us/step - loss: 36.7562 - mean_absolute_error: 4.5074 - val_loss: 34.2384 - val_mean_absolute_error: 4.3929
Epoch 2/3
17000/17000 [==============================] - 4s 253us/step - loss: 33.5529 - mean_absolute_error: 4.2956 - val_loss: 32.0291 - val_mean_absolute_error: 4.2484
Epoch 3/3
17000/17000 [==============================] - 5s 265us/step - loss: 31.0547 - mean_absolute_error: 4.1340 - val_loss: 30.6292 - val_mean_absolute_error: 4.1480

Then:

print(time_callback.times) 

Output:

[4.531331300735474, 4.308278322219849, 4.505300283432007] 
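To get a single figure per model for comparison, a small follow-up sketch (plain Python over the list above, nothing Keras-specific) could sum and average the recorded times:

times = time_callback.times
print(f"total: {sum(times):.2f}s, mean per epoch: {sum(times) / len(times):.2f}s")
# For the list above: total: 13.34s, mean per epoch: 4.45s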
answered Sep 28 '22 by ryh