I train a model A and try to use the output of the intermediate layer named "layer_x" as an additional input for model B. I tried to obtain the output of the intermediate layer as described in the Keras FAQ: https://keras.io/getting-started/faq/#how-can-i-obtain-the-output-of-an-intermediate-layer.
Model A:
inputs = Input(shape=(100,))
dnn = Dense(1024, activation='relu')(inputs)
dnn = Dense(128, activation='relu', name="layer_x")(dnn)
dnn = Dense(1024, activation='relu')(dnn)
output = Dense(10, activation='softmax')(dnn)
Model B:
input_1 = Input(shape=(200,))
input_2 = Input(shape=(100,)) # input for model A
# loading model A
model_a = keras.models.load_model(path_to_saved_model_a)
intermediate_layer_model = Model(inputs=model_a.input,
outputs=model_a.get_layer("layer_x").output)
intermediate_output = intermediate_layer_model.predict(data)
merge_layer = concatenate([input_1, intermediate_output])
dnn_layer = Dense(512, activation="relu")(merge_layer)
output = Dense(5, activation="sigmoid")(dnn_layer)
model = keras.models.Model(inputs=[input_1, input_2], outputs=output)
When I debug, I get an error on this line:
intermediate_layer_model = Model(inputs=model_a.input,
outputs=model_a.get_layer("layer_x").output)
File "..", line 89, in set_model
outputs=self.neural_net_asc.model.get_layer("layer_x").output)
File "C:\WinPython\python-3.5.3.amd64\lib\site-packages\keras\legacy\interfaces.py", line 87, in wrapper
return func(*args, **kwargs)
File "C:\WinPython\python-3.5.3.amd64\lib\site-packages\keras\engine\topology.py", line 1592, in __init__
mask = node.output_masks[tensor_index]
AttributeError: 'Node' object has no attribute 'output_masks'
I can access the tensor with get_layer("layer_x").output, and its output_mask is None. Do I have to set an output mask manually, and if so, how do I set it up?
The key is to first call .get_layer on the Model object, then call .get_layer again on the result to pick the specific layer, and THEN take .output:
layer_output = model.get_layer('Your-Model-Object').get_layer('the-layer-contained-in-your-Model-object').output
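For instance, a minimal sketch (with hypothetical model and layer names) of reaching a layer that lives inside a nested Model object:
from keras.layers import Input, Dense
from keras.models import Model

inner_in = Input(shape=(100,))
inner_out = Dense(128, name="layer_x")(inner_in)
inner_model = Model(inner_in, inner_out, name="inner_model")

outer_in = Input(shape=(100,))
outer_out = Dense(10)(inner_model(outer_in))
outer_model = Model(outer_in, outer_out)

# get the nested Model first, then the layer inside it, then its output tensor
layer_output = outer_model.get_layer("inner_model").get_layer("layer_x").output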
According to the documentation, a layer's output can be extracted like this:
layer_name = 'my_layer'
intermediate_layer_model = Model(inputs=model.input,
                                 outputs=model.get_layer(layer_name).output)
intermediate_output = intermediate_layer_model.predict(data)
You can also visualize what the output looks like at the intermediate layer, look at its weights, count its parameters, and inspect the layer summary, for example after each activation layer.
If you have multiple inputs, you can see the order of your input layers when you use model.summary() to print the structure of your network. Then you just need to use the input layers in that order in your code to get the intermediate layers' output, as in the sketch below.
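A minimal sketch (with hypothetical shapes and layer names) of checking the input order with model.summary() and then building the intermediate model for a multi-input network:
from keras.layers import Input, Dense, concatenate
from keras.models import Model

in_a = Input(shape=(100,), name="in_a")
in_b = Input(shape=(200,), name="in_b")
hidden = Dense(64, name="layer_x")(concatenate([in_a, in_b]))
out = Dense(1)(hidden)
multi_model = Model(inputs=[in_a, in_b], outputs=out)

multi_model.summary()  # the summary lists the input layers in their expected order

# reuse the inputs in that same order for the intermediate model
intermediate_model = Model(inputs=multi_model.inputs,
                           outputs=multi_model.get_layer("layer_x").output)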
There are two things that you seem to be doing wrong:
intermediate_output = intermediate_layer_model.predict(data)
When you call .predict(), you actually pass data through the graph and ask what the result will be. When you do that, intermediate_output will be a numpy array of values, not a layer/tensor as you would like it to be.
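To illustrate the difference, a small sketch (data here is a hypothetical numpy array of shape (n, 100)):
intermediate_output = intermediate_layer_model.predict(data)
print(type(intermediate_output))                  # <class 'numpy.ndarray'> - concrete values
print(type(model_a.get_layer("layer_x").output))  # a symbolic tensor you can keep building the graph on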
Secondly, you don't need to recreate a new intermediate model. You can directly use the part of model_a that interests you.
Here is code that "compiles" for me:
from keras.layers import Input, Dense, concatenate
from keras.models import Model
inputs = Input(shape=(100,))
dnn = Dense(1024, activation='relu')(inputs)
dnn = Dense(128, activation='relu', name="layer_x")(dnn)
dnn = Dense(1024, activation='relu')(dnn)
output = Dense(10, activation='softmax')(dnn)
model_a = Model(inputs=inputs, outputs=output)
# You don't need to recreate an input for the model_a,
# it already has one and you can reuse it
input_b = Input(shape=(200,))
# Here you get the layer that interests you from model_a,
# it is still linked to its input layer, you just need to remember it for later
intermediate_from_a = model_a.get_layer("layer_x").output
# Since intermediate_from_a is a layer, you can concatenate it with the other input
merge_layer = concatenate([input_b, intermediate_from_a])
dnn_layer = Dense(512, activation="relu")(merge_layer)
output_b = Dense(5, activation="sigmoid")(dnn_layer)
# Here you remember that one input is input_b and the other one is from model_a
model_b = Model(inputs=[input_b, model_a.input], outputs=output_b)
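For completeness, a hedged sketch (with made-up random data and arbitrary hyperparameters) of how model_b could then be compiled and trained; note that the second input is the data that would have fed model_a:
import numpy as np

x_b = np.random.random((32, 200))  # data for input_b
x_a = np.random.random((32, 100))  # data for model_a's input
y = np.random.random((32, 5))

model_b.compile(optimizer="adam", loss="binary_crossentropy")
model_b.fit([x_b, x_a], y, epochs=1, batch_size=8)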
I hope this is what you wanted to do.
Please tell me if something isn't clear :-)