
Keras Model Accuracy differs after loading the same saved model

I trained a Keras Sequential model and later loaded the same saved model. The two give different accuracies. I came across a similar question but was not able to solve the problem.

Sample code: loading the embeddings and training the model

import numpy as np
import gensim
from sklearn.model_selection import train_test_split

# load the pre-trained FastText embeddings
model = gensim.models.FastText.load('abc.simple')
X, y = load_data()
Vectors = np.array(vectors(X))
X_train, X_test, y_train, y_test = train_test_split(Vectors, np.array(y),
                                                    test_size=0.3, random_state=0)

# reshape the data for input to our model
X_train = X_train.reshape(X_train.shape[0], 100, max_tokens, 1)
X_test = X_test.reshape(X_test.shape[0], 100, max_tokens, 1)
print(X_train.shape)

model2 = train()

score = model2.evaluate(X_test, y_test, verbose=0)
print(score)

Training accuracy is 90%. I then saved the model:

# Saving Model
model_json = model2.to_json()
with open("model_architecture.json", "w") as json_file:
  json_file.write(model_json)
model2.save_weights("model_weights.h5")
print("Saved model to disk")

But after I restarted the kernel, loaded the saved model, and ran it on the same set of data, the accuracy dropped.

# load json and create model
from keras.models import model_from_json

json_file = open('model_architecture.json', 'r')
loaded_model_json = json_file.read()
json_file.close()
loaded_model = model_from_json(loaded_model_json)

#load weights into new model
loaded_model.load_weights("model_weights.h5")
print("Loaded model from disk")

# evaluate loaded model on test data
loaded_model.compile(loss='binary_crossentropy', optimizer='rmsprop',
                     metrics=['accuracy'])

score = loaded_model.evaluate(X_test, y_test, verbose=0)
print(score) 

The accuracy dropped to 75% on the same set of data.

How can I make it consistent?

I have tried the following, but it did not help:

from keras.backend import manual_variable_initialization
manual_variable_initialization(True)

I even saved the whole model at once (weights and architecture), but that did not solve the issue either.
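
Roughly, the whole-model attempt looked like this (using the standard Keras model.save / load_model API; 'full_model.h5' is just a placeholder name):

# saving architecture + weights + optimizer state in one file
model2.save('full_model.h5')

# later, after restarting the kernel
from keras.models import load_model
loaded_model = load_model('full_model.h5')
score = loaded_model.evaluate(X_test, y_test, verbose=0)
print(score)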

asked Feb 25 '18 by Abhay Raj Singh

2 Answers

I had the same problem due to a silly mistake of mine: after loading the model, the shuffle option of my data generator (useful for training) was still set to True instead of False. After changing it to False, the model predicted as expected. It would be nice if Keras could take care of this automatically. This is the critical part of my code:

from keras.models import load_model

pred_generator = pred_datagen.flow_from_directory(
    directory='./ims_dir',
    target_size=(100, 100),
    color_mode="rgb",
    batch_size=1,
    class_mode="categorical",
    shuffle=False,  # keep the order fixed so predictions line up with the labels
)

model = load_model(logpath_ms)

pred = model.predict_generator(pred_generator, steps=N, verbose=1)
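
As a quick sanity check (a minimal sketch, assuming a categorical setup like the one above), you can compare the predictions against the generator's label order; this only works because shuffle=False keeps that order fixed:

import numpy as np

# With shuffle=False, predictions come back in the same order as
# pred_generator.classes, so accuracy can be computed directly.
predicted_labels = np.argmax(pred, axis=1)
true_labels = pred_generator.classes
accuracy = np.mean(predicted_labels == true_labels)
print("accuracy:", accuracy)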
answered Sep 23 '22 by NeStack


Not sure if your issue has been solved, but for future readers: I had exactly the same problem with saving and loading the weights. On loading the model, the accuracy and loss changed drastically, from 68% accuracy down to 2%. In my experiment I am using TensorFlow as the backend with Keras Embedding, LSTM and Dense layers. My issue was solved by fixing the seed for Keras, which uses the NumPy random generator, and, since I am using TensorFlow as the backend, by also fixing its seed. These are the lines I added at the top of the file where the model is defined:

from numpy.random import seed
seed(42)  # Keras/NumPy seed fixing

import tensorflow as tf
tf.random.set_seed(42)  # TensorFlow seed fixing

I hope this helps. For more information, have a look at https://machinelearningmastery.com/reproducible-results-neural-networks-keras/
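
For completeness, a fuller seeding setup along the lines of that article might look like the sketch below; it also seeds Python's built-in random module, and the exact calls you need can vary with your Keras/TensorFlow versions:

import random

import numpy as np
import tensorflow as tf

# Fix every source of randomness before building the model.
random.seed(42)          # Python's built-in RNG
np.random.seed(42)       # NumPy, used by Keras weight initializers
tf.random.set_seed(42)   # TensorFlow ops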

answered Sep 24 '22 by Rishabh Sahrawat