
Adding a variable into Keras/TensorFlow CNN dense layer

I was wondering if it is possible to add a variable into a convolutional neural network's dense layer (so as well as the connections from the previous convolutional layers, there would be an additional feature set that could be used for discriminatory purposes)? If this is possible, can anyone point me to an example/documentation explaining how to do so?

I am hoping to use Keras, but am happy to use TensorFlow if Keras is too restrictive.

EDIT: In this case, the way that I would think that this should work is that I provide a list containing images and associated feature sets to the neural network (and during training the associated classifications).

EDIT2: The architecture that I want looks something like:

              ___________      _________      _________      _________     ________    ______
              | Conv    |     | Max    |     | Conv    |     | Max    |    |       |   |     |
    Image --> | Layer 1 | --> | Pool 1 | --> | Layer 2 | --> | Pool 2 | -->|       |   |     |
              |_________|     |________|     |_________|     |________|    | Dense |   | Out |
                                                                           | Layer |-->|_____|
   Other      ------------------------------------------------------------>|       |
   Data                                                                    |       |
                                                                           |_______|
Thomas Russell asked Mar 02 '17 13:03


2 Answers

Indeed, as @Marcin said, you can use a merge layer.

I advise you to use the Functional API for this. If you're not familiar with it, have a look at the Keras documentation on the functional API.

Here is your doodled network model, written with the Keras functional API:

from keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense, concatenate
from keras.models import Model

# This is your image input definition. You have to specify a shape.
image_input = Input(shape=(32, 32, 3))
# Some more data input with 10 features (e.g.)
other_data_input = Input(shape=(10,))

# First convolution, with example hyperparameters
conv1 = Conv2D(filters=32, kernel_size=(3, 3), padding='same', activation='tanh')(image_input)
# MaxPool it
conv1 = MaxPooling2D(pool_size=(2, 2))(conv1)
# Second convolution
conv2 = Conv2D(filters=64, kernel_size=(3, 3), padding='same', activation='tanh')(conv1)
# MaxPool it
conv2 = MaxPooling2D(pool_size=(2, 2))(conv2)
# Flatten the output to enable the merge to happen with the other input
first_part_output = Flatten()(conv2)

# Merge the output of the convnet with your added features by concatenation
merged_model = concatenate([first_part_output, other_data_input])

# Predict on the output (say you want a binary classification)
predictions = Dense(1, activation='sigmoid')(merged_model)

# Now create the model
model = Model(inputs=[image_input, other_data_input], outputs=predictions)
# See your model
model.summary()

# Compile it
model.compile(optimizer='adamax', loss='binary_crossentropy')

There you go :) It is quite easy in the end: define as many inputs and outputs as you want, and just specify them in a list when you create the Model object. When you fit the model, feed each input separately, in a list as well.
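To make that last point concrete, here is a minimal sketch of how the training data for a two-input model like this would be shaped (the array sizes are illustrative assumptions matching the `Input(shape=(32, 32, 3))` and `Input(shape=(10,))` definitions above; `n_samples` is hypothetical):

```python
import numpy as np

n_samples = 100
# Image input: one (32, 32, 3) image per sample
X_images = np.random.rand(n_samples, 32, 32, 3)
# Extra-feature input: one 10-dimensional feature vector per sample
X_features = np.random.rand(n_samples, 10)
# Binary labels for the sigmoid output
y = np.random.randint(0, 2, size=(n_samples, 1))

# With the model defined above, training would then look like:
# model.fit([X_images, X_features], y, epochs=10, batch_size=32)
```

The inputs are passed as a list in the same order as `inputs=[image_input, other_data_input]` in the Model constructor.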

Nassim Ben answered Oct 26 '22 14:10


Ok, assuming that you have the convolution_model, you could do this in the following manner:

from keras.layers import Input, Flatten, concatenate

convolution_model = Flatten()(convolution_model) # if it wasn't flattened before
static_features_input = Input(shape=(static_features_size,))
blended_features = concatenate([convolution_model, static_features_input])
... here you are defining a blending model with blended_features as input

The Keras documentation has examples of how to merge different inputs.
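The concatenation itself just stacks the two feature vectors along the last axis, sample by sample. A quick NumPy sketch of the shape arithmetic (the sizes 128 and 10 are made-up examples):

```python
import numpy as np

# Toy batch: 4 samples, 128 flattened conv features, 10 static features
conv_out = np.random.rand(4, 128)
static = np.random.rand(4, 10)

# Concatenate along the feature axis, as the concat merge does
blended = np.concatenate([conv_out, static], axis=-1)
print(blended.shape)  # (4, 138)
```

So the dense layers after the merge see one 138-dimensional vector per sample, combining both sources of information.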

Marcin Możejko answered Oct 26 '22 14:10