In the TensorFlow Functional API guide, there's an example shown where multiple models are created using the same graph of layers. (https://www.tensorflow.org/beta/guide/keras/functional#using_the_same_graph_of_layers_to_define_multiple_models)
from tensorflow import keras
from tensorflow.keras import layers

encoder_input = keras.Input(shape=(28, 28, 1), name='img')
x = layers.Conv2D(16, 3, activation='relu')(encoder_input)
x = layers.Conv2D(32, 3, activation='relu')(x)
x = layers.MaxPooling2D(3)(x)
x = layers.Conv2D(32, 3, activation='relu')(x)
x = layers.Conv2D(16, 3, activation='relu')(x)
encoder_output = layers.GlobalMaxPooling2D()(x)
encoder = keras.Model(encoder_input, encoder_output, name='encoder')
encoder.summary()
x = layers.Reshape((4, 4, 1))(encoder_output)
x = layers.Conv2DTranspose(16, 3, activation='relu')(x)
x = layers.Conv2DTranspose(32, 3, activation='relu')(x)
x = layers.UpSampling2D(3)(x)
x = layers.Conv2DTranspose(16, 3, activation='relu')(x)
decoder_output = layers.Conv2DTranspose(1, 3, activation='relu')(x)
autoencoder = keras.Model(encoder_input, decoder_output, name='autoencoder')
autoencoder.summary()
Is it possible to save and load these two models while still sharing the same graph? If I save and load them in the following way:
# Save
encoder.save('encoder.h5')
autoencoder.save('autoencoder.h5')
# Load
new_encoder = keras.models.load_model('encoder.h5')
new_autoencoder = keras.models.load_model('autoencoder.h5')
the new encoder and autoencoder will no longer share the same graph, and therefore no longer train together.
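The loss of sharing is easy to observe on a toy pair of models (the small `Dense` architecture and the names `small`/`big`/`shared` below are stand-ins of my own, not from the question): before saving, both models hold the very same layer object; after `load_model`, each file yields its own independent copy.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Two models built on one graph of layers, as in the question.
inp = keras.Input(shape=(4,), name='img')
out = layers.Dense(2, name='shared')(inp)
small = keras.Model(inp, out)
big = keras.Model(inp, layers.Dense(1)(out))

small.save('small.h5')
big.save('big.h5')
new_small = keras.models.load_model('small.h5')
new_big = keras.models.load_model('big.h5')

# Originally the two models share one layer object...
assert small.get_layer('shared') is big.get_layer('shared')
# ...but the reloaded models each hold a separate copy.
assert new_small.get_layer('shared') is not new_big.get_layer('shared')
```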
That is a cool question. The encoder and autoencoder no longer share the same graph because they are being saved as disjoint models. In fact, encoder is being saved twice, as it is also embedded in autoencoder.
To restore both models while still sharing the same graph, I would suggest the following approach:
Name the encoder's output layer. For example:
encoder_output = layers.GlobalMaxPooling2D(name='encoder_output')(x)
Save only the autoencoder:
autoencoder.save('autoencoder.h5')
Restore the autoencoder:
new_autoencoder = keras.models.load_model('autoencoder.h5')
Reconstruct the encoder's graph from the restored autoencoder so that they share the common layers:
encoder_input = new_autoencoder.get_layer('img').input
encoder_output = new_autoencoder.get_layer('encoder_output').output
new_encoder = keras.Model(encoder_input, encoder_output)
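Putting the steps together on a small stand-in architecture (the `Dense` layers below are mine, chosen only to keep the sketch short; the layer names `img` and `encoder_output` match the answer) shows that the rebuilt encoder really does train with the restored autoencoder:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Toy stand-ins for the question's encoder/autoencoder.
inp = keras.Input(shape=(8,), name='img')
enc_out = layers.Dense(3, name='encoder_output')(inp)
dec_out = layers.Dense(8)(enc_out)
autoencoder = keras.Model(inp, dec_out, name='autoencoder')

autoencoder.save('autoencoder.h5')
new_autoencoder = keras.models.load_model('autoencoder.h5')

# Rebuild the encoder from the restored autoencoder's own layers.
encoder_input = new_autoencoder.get_layer('img').input
encoder_output = new_autoencoder.get_layer('encoder_output').output
new_encoder = keras.Model(encoder_input, encoder_output)

# Both models now reference the same layer objects.
assert (new_encoder.get_layer('encoder_output')
        is new_autoencoder.get_layer('encoder_output'))

# Training the autoencoder therefore updates the encoder too.
new_autoencoder.compile(optimizer='adam', loss='mse')
x = np.random.rand(16, 8).astype('float32')
before = new_encoder.get_layer('encoder_output').get_weights()[0].copy()
new_autoencoder.fit(x, x, epochs=1, verbose=0)
after = new_encoder.get_layer('encoder_output').get_weights()[0]
assert not np.allclose(before, after)
```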
Alternatively, you could also save/load the weights and reconstruct the graphs manually.
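A minimal sketch of that alternative, again with a toy architecture of my own (`build_models` is a hypothetical helper that simply re-runs the same layer definitions, so the freshly built encoder and autoencoder share their layers by construction):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def build_models():
    # Re-running the same construction code recreates the shared graph.
    inp = keras.Input(shape=(8,), name='img')
    enc_out = layers.Dense(3, name='encoder_output')(inp)
    dec_out = layers.Dense(8)(enc_out)
    encoder = keras.Model(inp, enc_out, name='encoder')
    autoencoder = keras.Model(inp, dec_out, name='autoencoder')
    return encoder, autoencoder

encoder, autoencoder = build_models()
autoencoder.save_weights('autoencoder_weights.h5')

# Later: rebuild the identical graph, then restore only the weights.
encoder2, autoencoder2 = build_models()
autoencoder2.load_weights('autoencoder_weights.h5')

# Loading the autoencoder's weights also restores encoder2, because the
# two freshly built models share the same layer objects.
w_saved = autoencoder.get_layer('encoder_output').get_weights()[0]
w_restored = encoder2.get_layer('encoder_output').get_weights()[0]
assert np.allclose(w_saved, w_restored)
```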