I am trying to merge 2 Sequential models in Keras. Here is the code:
model1 = Sequential(layers=[
    # input layers and convolutional layers
    Conv1D(128, kernel_size=12, strides=4, padding='valid', activation='relu', input_shape=input_shape),
    MaxPooling1D(pool_size=6),
    Conv1D(256, kernel_size=12, strides=4, padding='valid', activation='relu'),
    MaxPooling1D(pool_size=6),
    Dropout(.5),
])
model2 = Sequential(layers=[
    # input layers and convolutional layers
    Conv1D(128, kernel_size=20, strides=5, padding='valid', activation='relu', input_shape=input_shape),
    MaxPooling1D(pool_size=5),
    Conv1D(256, kernel_size=20, strides=5, padding='valid', activation='relu'),
    MaxPooling1D(pool_size=5),
    Dropout(.5),
])
model = merge([model1, model2], mode = 'sum')
Flatten(),
Dense(256, activation='relu'),
Dropout(.5),
Dense(128, activation='relu'),
Dropout(.35),
# output layer
Dense(5, activation='softmax')
return model
Here is the error log:
File "/nics/d/home/dsawant/anaconda3/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py", line 392, in is_keras_tensor raise ValueError('Unexpectedly found an instance of type
' + str(type(x)) + '
. ' ValueError: Unexpectedly found an instance of type<class 'keras.models.Sequential'>
. Expected a symbolic tensor instance.
Some more log:
ValueError: Layer merge_1 was called with an input that isn't a symbolic tensor. Received type: <class 'keras.models.Sequential'>. Full input: [<keras.models.Sequential object at 0x2b32d518a780>, <keras.models.Sequential object at 0x2b32d521ee80>]. All inputs to the layer should be tensors.
How can I merge these 2 Sequential models that use different window sizes, and apply functions like 'max', 'sum', etc. to them?
The most common way to combine models is to average their predictions; taking a weighted average can improve accuracy further. Bagging, boosting, and concatenation are other methods used to combine deep learning models, and stacked ensemble learning uses a second-level model to learn how to combine them.
Blending works as follows: the training data is split into a training set and a validation set; the base model is fitted on the training set; predictions are made on the validation set (and the test set); those validation predictions are then used as features to train a new, second-level model.
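As a rough illustration of that procedure, here is a minimal blending sketch (assuming scikit-learn models and a made-up NumPy dataset; none of these names come from the question):
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
# Made-up data purely for illustration
X, y = np.random.rand(1000, 20), np.random.randint(0, 2, 1000)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.25)
# Fit the base model on the training split
base = RandomForestClassifier().fit(X_train, y_train)
# Its predictions on the validation split become features for a second-level model
blender = LogisticRegression().fit(base.predict_proba(X_val), y_val)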
Using the functional API gives you all the flexibility you need here.
When using the functional API, you need to keep track of inputs and outputs, instead of just defining layers.
You define a layer, then you call the layer with an input tensor to get the output tensor. Models and layers can be called exactly the same way.
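A minimal sketch of that calling style (the shapes and layer sizes here are arbitrary):
from keras.layers import Input, Dense
inp = Input(shape=(100,))                      # a symbolic input tensor
hidden = Dense(64, activation='relu')(inp)     # calling the layer returns an output tensor
out = Dense(5, activation='softmax')(hidden)   # keep chaining tensors through layers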
For the merge step, I prefer the more intuitive merge layers, such as Add(), Multiply() and Concatenate().
from keras.layers import *
mergedOut = Add()([model1.output,model2.output])
#Add() -> creates a merge layer that sums the inputs
#The second parentheses "calls" the layer with the output tensors of the two models
#it will demand that both model1 and model2 have the same output shape
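If the two branches did not end with the same shape, one alternative (not required by this answer) would be to flatten each branch first and concatenate the feature vectors; the Flatten() in the next block would then be unnecessary:
#Alternative sketch: works even when the two branch outputs have different shapes
flat1 = Flatten()(model1.output)
flat2 = Flatten()(model2.output)
mergedOut = Concatenate()([flat1, flat2])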
This same idea applies to all the following layers: we keep passing the output tensor to each layer and getting a new output tensor back (if we were interested in creating branches, we would use a different variable for each output of interest to keep track of them):
mergedOut = Flatten()(mergedOut)
mergedOut = Dense(256, activation='relu')(mergedOut)
mergedOut = Dropout(.5)(mergedOut)
mergedOut = Dense(128, activation='relu')(mergedOut)
mergedOut = Dropout(.35)(mergedOut)
# output layer
mergedOut = Dense(5, activation='softmax')(mergedOut)
Now that we have created the "path", it's time to create the Model. Creating the model is just a matter of telling it at which input tensors it starts and at which output tensors it ends:
from keras.models import Model
newModel = Model([model1.input,model2.input], mergedOut)
#use lists if you want more than one input or output
Notice that since this model has two inputs, you have to train it with two different X_train arrays passed in a list:
newModel.fit([X_train_1, X_train_2], Y_train, ....)
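For completeness, a hedged sketch of compiling and fitting (the optimizer, loss, epochs, and batch size are just illustrative choices; categorical_crossentropy assumes one-hot encoded Y_train, which matches the 5-unit softmax output):
newModel.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
newModel.fit([X_train_1, X_train_2], Y_train, epochs=10, batch_size=32)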
Now suppose you want only one input, with both model1 and model2 taking that same input.
The functional API allows that quite easily by creating an input tensor and feeding it to the models (we call the models as if they were layers):
commonInput = Input(input_shape)
out1 = model1(commonInput)
out2 = model2(commonInput)
mergedOut = Add()([out1,out2])
In this case, the Model takes that single input:
oneInputModel = Model(commonInput,mergedOut)