I know that you can reuse Keras layers. For example, I declare two layers for a decoder network:
decoder_layer_1 = Dense(intermediate_dim, activation='relu', name='decoder_layer_1')
decoder_layer_2 = Dense(intermediate_dim, activation='relu', name='decoder_layer_2')
Use them in the first model:
decoded = decoder_layer_1(z)
decoded = decoder_layer_2(decoded)
Use them in the second model:
_decoded = decoder_layer_1(decoder_input)
_decoded = decoder_layer_2(_decoded)
The above method is fine if I only need to reuse a couple of layers, but it becomes cumbersome for a large number of layers (e.g. a decoder network with 10 layers). Is there a more efficient way to do this than explicitly declaring each layer? Ideally I would implement it as shown below:
decoder_layers = group_of_layers()
Reuse in the first model:
decoded = decoder_layers(z)
Reuse in the second model:
_decoded = decoder_layers(decoder_input)
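For concreteness, the closest workaround I can think of is keeping the layers in a plain Python list and applying them in a loop (apply_group here is a hypothetical helper of my own, not an existing Keras API; since the same layer objects are called each time, their weights would be shared):

def apply_group(layers, x):
    # Chain the given layers; calling the same layer objects again reuses their weights.
    for layer in layers:
        x = layer(x)
    return x

decoder_layers = [Dense(intermediate_dim, activation='relu', name=f'decoder_layer_{i}')
                  for i in range(10)]
decoded = apply_group(decoder_layers, z)                # first model
_decoded = apply_group(decoder_layers, decoder_input)   # second model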
I struggled with this problem too. What works for me is to wrap shared parts in a model, with its own input definition:
import keras

def group_of_layers(intermediate_dim):
    # The input shape must match the tensors this sub-model will later be applied to.
    shared_model_input = keras.layers.Input(shape=...)
    shared_internal_layer = keras.layers.Dense(intermediate_dim, activation='relu', name='shared_internal_layer')(shared_model_input)
    shared_model_output = keras.layers.Dense(intermediate_dim, activation='relu', name='shared_model_output')(shared_internal_layer)
    return keras.models.Model(shared_model_input, shared_model_output)
In the Functional API, you can use the shared model in the same way as a single layer, as long as the model's input shape matches the shape of the tensor you apply it to:
group = group_of_layers(intermediate_dim)
result1 = group(previous_layer)
result2 = group(different_previous_layer)
The weights will then be shared between both uses.
This is nicely described in the documentation; see the Shared vision model example.
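Putting it together, here is a minimal sketch (the dimension, tensor names, and the choice of shape=(intermediate_dim,) for the Input above are assumptions for illustration):

import keras

intermediate_dim = 64
group = group_of_layers(intermediate_dim)

# First model: pass a tensor through the shared decoder.
z = keras.layers.Input(shape=(intermediate_dim,))
decoded = group(z)
model_1 = keras.models.Model(z, decoded)

# Second model: reuse the very same sub-model, and thus its weights.
decoder_input = keras.layers.Input(shape=(intermediate_dim,))
_decoded = group(decoder_input)
model_2 = keras.models.Model(decoder_input, _decoded)

# Both models contain the same Model instance as a layer, so training
# one updates the weights seen by the other.
assert model_1.layers[-1] is model_2.layers[-1]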