I want to train a model with a shared layer in the following form:
x --> F(x)
==> G(F(x),F(y))
y --> F(y)
x and y are two separate input layers, and F is a shared layer. G is the last layer, applied after concatenating F(x) and F(y).
Is it possible to model this in Keras? How?
Shared layers are an advanced deep learning concept and are only possible with the Keras functional API. They let you define an operation once and apply the exact same operation (with the exact same weights) to different inputs.
The Sequential model API, where layers are added one after another to an instance of the Sequential class, cannot express this two-input topology; the functional API is required.
In Keras, the input layer itself is not a layer but a tensor: it is the starting tensor you send to the first hidden layer. Its shape must match your training data, excluding the batch dimension. For example, if you have 30 images of 50x50 pixels in RGB (3 channels), your input data has shape (30, 50, 50, 3), and you would pass shape=(50, 50, 3) to Input.
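As a minimal illustration of this (the 50x50 RGB shape is just the example above):

```python
from keras.layers import Input

# shape excludes the batch dimension: (50, 50, 3), not (30, 50, 50, 3)
inp = Input(shape=(50, 50, 3))
print(inp.shape)  # the first dimension (batch size) is None
```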
You can use Keras functional API for this purpose:
from keras.layers import Input, concatenate

x = Input(shape=...)  # fill in your input shape
y = Input(shape=...)

# create the shared layer F once...
shared_layer = MySharedLayer(...)  # e.g. a Dense or Conv2D layer

# ...and apply the same layer object to both inputs (weights are shared)
out_x = shared_layer(x)
out_y = shared_layer(y)

concat = concatenate([out_x, out_y])
# pass concat to other layers (G) ...
Note that x and y could be the output tensors of any layer, not necessarily input layers.
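Putting it all together, here is a complete runnable sketch. The input length 10, the Dense(8) shared layer, and the sigmoid output are arbitrary choices for illustration, not part of the question:

```python
from keras.layers import Input, Dense, concatenate
from keras.models import Model

x = Input(shape=(10,))
y = Input(shape=(10,))

# F: a single layer object, applied twice, so its weights are shared
shared_layer = Dense(8, activation='relu')
out_x = shared_layer(x)  # F(x)
out_y = shared_layer(y)  # F(y)

# G: a layer applied to the concatenation of F(x) and F(y)
concat = concatenate([out_x, out_y])
out = Dense(1, activation='sigmoid')(concat)

model = Model(inputs=[x, y], outputs=out)
model.summary()
```

Because F is one layer object, it contributes only one set of weights (10*8 + 8 = 88 parameters) no matter how many inputs it is applied to.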