I have a simple Keras model. After the model is saved, I am unable to load it. This is the error I get when I try to load the saved model:
Using TensorFlow backend.
Traceback (most recent call last):
File "test.py", line 4, in <module>
model = load_model("test.h5")
File "/usr/lib/python3.7/site-packages/keras/engine/saving.py", line 419, in load_model
model = _deserialize_model(f, custom_objects, compile)
File "/usr/lib/python3.7/site-packages/keras/engine/saving.py", line 258, in _deserialize_model
.format(len(layer_names), len(filtered_layers))
ValueError: You are trying to load a weight file containing 6 layers into a model with 0 layers
When I instead instantiate the model, use model.load_weights, and then ask for a model summary, the model is None (print(model) prints None):
Traceback (most recent call last):
File "test.py", line 7, in <module>
print(model.summary())
AttributeError: 'NoneType' object has no attribute 'summary'
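(The loading script test.py is not shown; reconstructed from the two tracebacks it is roughly the sketch below. The assignment of the return value of load_weights is an assumption on my side, but it would explain the None, since load_weights() returns None.)
from keras.models import load_model
from model import create_model

# Attempt 1: load the full saved model -- raises the ValueError above
model = load_model("test.h5")

# Attempt 2: rebuild the architecture and load only the weights.
# load_weights() returns None, so assigning its result makes model None,
# which matches the AttributeError above. (This assignment is an assumption.)
model = create_model().load_weights("test_weights.h5")
print(model.summary())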
Here is my network:
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, InputLayer, Flatten, Dense, BatchNormalization
def create_model():
    kernel_size = 5
    pool_size = 2
    batchsize = 64
    model = Sequential()
    model.add(InputLayer((36, 120, 1)))
    model.add(Conv2D(filters=20, kernel_size=kernel_size, activation='relu', padding='same'))
    model.add(BatchNormalization())
    model.add(MaxPooling2D(pool_size))
    model.add(Conv2D(filters=50, kernel_size=kernel_size, activation='relu', padding='same'))
    model.add(BatchNormalization())
    model.add(MaxPooling2D(pool_size))
    model.add(Flatten())
    model.add(Dense(120, activation='relu'))
    model.add(Dense(2, activation='relu'))
    return model
Training procedure script:
import numpy as np
from keras import optimizers
from keras import losses
from sklearn.model_selection import train_test_split
from model import create_model
def data_loader(images, pos):
    while True:
        for i in range(0, images.shape[0], 64):
            if (i + 64) < images.shape[0]:
                img_batch = images[i:i+64]
                pos_batch = pos[i:i+64]
                yield img_batch, pos_batch
            else:
                img_batch = images[i:]
                pos_batch = pos[i:]
                yield img_batch, pos_batch
def main():
    model = create_model()
    sgd = optimizers.Adadelta(lr=0.01, rho=0.95, epsilon=None, decay=0.0)
    model.compile(loss=losses.mean_squared_error, optimizer=sgd)
    print("training")
    data = np.load("data.npz")
    images = data['images']
    pos = data['pos']
    x_train, x_test, y_train, y_test = train_test_split(images, pos, test_size=0.33, random_state=42)
    model.fit_generator(data_loader(x_train, y_train), steps_per_epoch=x_train.shape[0]//64,
                        validation_data=data_loader(x_test, y_test),
                        validation_steps=x_test.shape[0]//64, epochs=1)
    model.save('test.h5')
    model.save_weights('test_weights.h5')
    print("training done")

if __name__ == '__main__':
    main()
You can also save your neural network model to JSON: the architecture is serialized with to_json() and written to a file, then later loaded via the model_from_json() function, which creates a new model from the JSON specification. The weights are saved directly from the model using the save_weights() function and later loaded using the symmetrical load_weights() function.
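A minimal sketch of that JSON workflow (file names are illustrative), using the same standalone Keras API as the code above:
from keras.models import model_from_json
from model import create_model

model = create_model()

# Serialize the architecture to JSON and the weights to HDF5
with open("model.json", "w") as json_file:
    json_file.write(model.to_json())
model.save_weights("test_weights.h5")

# Later: rebuild the model from the JSON spec, then load the weights into it
with open("model.json") as json_file:
    restored = model_from_json(json_file.read())
restored.load_weights("test_weights.h5")  # loads in place; the return value is None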
Call tf.keras.Model.save to save a model's architecture, weights, and training configuration in a single file/folder.
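For example, a small sketch with a toy model (names and paths are illustrative):
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(2, activation='relu', input_shape=(4,)),
])
model.compile(optimizer='adam', loss='mse')

# Architecture, weights and training configuration go into one artifact
model.save("full_model.h5")  # single HDF5 file (a directory path gives the SavedModel folder format)

restored = tf.keras.models.load_model("full_model.h5")
restored.summary()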
Drop InputLayer and use input_shape in the first layer instead. Your code will be similar to:
model = Sequential()
model.add(Conv2D(filters=20, ..., input_shape=(36, 120, 1)))
It seems models with InputLayer are not serialized to HDF5 correctly.
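Applied to the model in the question, the fix would look roughly like this (same architecture, only the InputLayer is replaced by input_shape on the first Conv2D):
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, BatchNormalization

def create_model():
    kernel_size = 5
    pool_size = 2
    model = Sequential()
    # input_shape on the first layer replaces the separate InputLayer
    model.add(Conv2D(filters=20, kernel_size=kernel_size, activation='relu',
                     padding='same', input_shape=(36, 120, 1)))
    model.add(BatchNormalization())
    model.add(MaxPooling2D(pool_size))
    model.add(Conv2D(filters=50, kernel_size=kernel_size, activation='relu', padding='same'))
    model.add(BatchNormalization())
    model.add(MaxPooling2D(pool_size))
    model.add(Flatten())
    model.add(Dense(120, activation='relu'))
    model.add(Dense(2, activation='relu'))
    return model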
Upgrade your TensorFlow and Keras to the latest versions.
Fix the interpreter problem as explained here.