I trained my model with epochs=10, then retrained it with epochs=3, and again with epochs=5, so there were three training runs in total (epochs=10, 3, 5). I want to combine the history of all three. For example, let h1 = history of model.fit for epochs=10, h2 = history of model.fit for epochs=3, and h3 = history of model.fit for epochs=5.
Now I want a single variable h containing h1 + h2 + h3, i.e. all the history appended into one object so that I can plot some graphs.
The code is:
start_time = time.time()
model.fit(x=X_train, y=y_train, batch_size=32, epochs=10, validation_data=(X_val, y_val), callbacks=[tensorboard, checkpoint])
end_time = time.time()
execution_time = (end_time - start_time)
print(f"Elapsed time: {hms_string(execution_time)}")
start_time = time.time()
model.fit(x=X_train, y=y_train, batch_size=32, epochs=3, validation_data=(X_val, y_val), callbacks=[tensorboard, checkpoint])
end_time = time.time()
execution_time = (end_time - start_time)
print(f"Elapsed time: {hms_string(execution_time)}")
start_time = time.time()
model.fit(x=X_train, y=y_train, batch_size=32, epochs=5, validation_data=(X_val, y_val), callbacks=[tensorboard, checkpoint])
end_time = time.time()
execution_time = (end_time - start_time)
print(f"Elapsed time: {hms_string(execution_time)}")
We can use the Keras callback keras.callbacks.ModelCheckpoint() to save the model at its best-performing epoch.
The number of epochs is a hyperparameter that defines the number of times the learning algorithm will work through the entire training dataset. One epoch means that each sample in the training dataset has had an opportunity to update the internal model parameters.
The weights can be saved directly from the model using the save_weights() function and later loaded using the symmetrical load_weights() function. The model's architecture can also be serialized to JSON and written to a model.json file in the local directory.
Alternatively, call tf.keras.Model.save to save a model's architecture, weights, and training configuration in a single file/folder.
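As a concrete illustration, the checkpoint callback used in the question's code could be defined along these lines; note that the file path and monitored metric here are illustrative choices, not taken from the original post:

```python
from tensorflow import keras

# Keep only the weights from the epoch with the lowest validation loss.
# 'best_model.keras' and monitor='val_loss' are illustrative choices.
checkpoint = keras.callbacks.ModelCheckpoint(
    filepath='best_model.keras',
    monitor='val_loss',
    save_best_only=True,
)

# Then pass it to training, e.g.:
# model.fit(..., callbacks=[checkpoint])
```

With save_best_only=True, the file is overwritten only when the monitored metric improves, so at the end of training it holds the best epoch rather than the last one.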
You can achieve this functionality by creating a class that subclasses tf.keras.callbacks.Callback and passing an instance of that class as a callback to model.fit.
import csv
import os

import tensorflow.keras.backend as K
from tensorflow import keras

model_directory = './xyz/'  # directory to save model history after every epoch

class StoreModelHistory(keras.callbacks.Callback):

    def on_epoch_end(self, epoch, logs=None):
        # Record the current learning rate alongside the other metrics
        if 'lr' not in logs.keys():
            logs.setdefault('lr', 0)
            logs['lr'] = K.get_value(self.model.optimizer.lr)

        # Write the header once, when the file does not exist yet
        if 'model_history.csv' not in os.listdir(model_directory):
            with open(model_directory + 'model_history.csv', 'a') as f:
                y = csv.DictWriter(f, logs.keys())
                y.writeheader()

        # Append this epoch's logs as a new row
        with open(model_directory + 'model_history.csv', 'a') as f:
            y = csv.DictWriter(f, logs.keys())
            y.writerow(logs)

model.fit(..., callbacks=[StoreModelHistory()])
Then you can load the CSV file and plot the model's loss, learning rate, metrics, etc.
import pandas as pd
import matplotlib.pyplot as plt

EPOCH = 10  # number of epochs the model has trained for

history_dataframe = pd.read_csv(model_directory + 'model_history.csv', sep=',')

# Plot training & validation loss values
plt.style.use("ggplot")
plt.plot(range(1, EPOCH + 1), history_dataframe['loss'])
plt.plot(range(1, EPOCH + 1), history_dataframe['val_loss'], linestyle='--')
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Validation'], loc='upper left')
plt.show()
Every time you call model.fit(), it returns a keras.callbacks.History object whose history attribute contains a dictionary. The keys of the dictionary are loss for the training loss, val_loss for the validation loss, and any other metrics that you may have set while compiling.
Therefore, in your case, you could do:
hist1 = model.fit(...)
# other code lines
hist2 = model.fit(...)
# other code lines
hist3 = model.fit(...)

# create an empty dict to collect all three history dicts into
total_history_dict = dict()

for some_key in hist1.history.keys():
    current_values = []  # to save values from all three history objects
    for hist_dict in [hist1.history, hist2.history, hist3.history]:
        current_values += hist_dict[some_key]
    total_history_dict[some_key] = current_values
Now, total_history_dict is a dictionary whose keys are, as usual, loss, val_loss, and any other metrics, and whose values are lists giving the loss/metric for each epoch. (The length of each list is the sum of the number of epochs across all three calls to model.fit.) You could now use the dictionary to plot things with matplotlib, save it to a pandas dataframe, etc.
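As a quick sanity check of the merging loop above, here is a self-contained sketch in which hand-made dictionaries stand in for hist1.history, hist2.history, and hist3.history (the numbers are made up, not real training output):

```python
# Stand-ins for hist1.history, hist2.history, hist3.history
# (made-up values; real ones come from model.fit)
h1 = {'loss': [0.9, 0.7, 0.6], 'val_loss': [1.0, 0.8, 0.7]}   # 3 epochs
h2 = {'loss': [0.5, 0.45],     'val_loss': [0.65, 0.6]}       # 2 epochs
h3 = {'loss': [0.4],           'val_loss': [0.55]}            # 1 epoch

total_history_dict = dict()
for some_key in h1.keys():
    current_values = []  # concatenated per-epoch values for this metric
    for hist_dict in [h1, h2, h3]:
        current_values += hist_dict[some_key]
    total_history_dict[some_key] = current_values

print(total_history_dict['loss'])
# [0.9, 0.7, 0.6, 0.5, 0.45, 0.4]  -- one entry per epoch, across all runs
```

Each merged list has one entry per epoch over all runs (here 3 + 2 + 1 = 6), so plotting it against range(1, 7) gives a single continuous curve.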