I want to define a Lambda layer that combines features with a cross product, and then merge those models, just like in the figure. What should I do?

As a test with model_1: take the 128-dimensional output of a Dense layer, use pywt to get two 64-dimensional features (cA and cD), then return cA*cD. (Of course I want to combine two models in the end, but I'm trying model_1 first.)
from keras.models import Sequential, Model
from keras.layers import Input, Convolution2D, MaxPooling2D
from keras.layers.core import Dense, Dropout, Activation, Flatten, Lambda
import pywt

def myFunc(x):
    # split the 128-dim vector into 64-dim approximation (cA)
    # and detail (cD) coefficients, then multiply element-wise
    (cA, cD) = pywt.dwt(x, 'db1')
    return cA * cD

batch_size = 32
nb_classes = 3
nb_epoch = 20
img_rows, img_cols = 200, 200
img_channels = 1
nb_filters = 32
nb_pool = 2
nb_conv = 3

inputs = Input(shape=(1, img_rows, img_cols))
x = Convolution2D(nb_filters, nb_conv, nb_conv, border_mode='valid',
                  activation='relu')(inputs)
x = Convolution2D(nb_filters, nb_conv, nb_conv, activation='relu')(x)
x = MaxPooling2D(pool_size=(nb_pool, nb_pool))(x)
x = Dropout(0.25)(x)
x = Flatten()(x)
y = Dense(128, activation='relu')(x)
cross = Lambda(myFunc, output_shape=(64,))(y)
predictions = Dense(nb_classes, activation='softmax')(cross)

model = Model(input=inputs, output=predictions)
model.compile(loss='categorical_crossentropy', optimizer='adadelta',
              metrics=['accuracy'])
model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch,
          verbose=1, validation_data=(X_test, Y_test))
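Note: pywt.dwt runs on NumPy arrays, not on symbolic Keras tensors, so calling it inside a Lambda layer will fail when the graph is built. If the goal is just cA*cD of a 'db1' (Haar) transform, the analysis step can be written directly with tensor operations instead. A minimal sketch with a hypothetical myFuncTensor, assuming an even-length (128-dim) input and glossing over pywt's edge-padding and sign conventions:

import numpy as np

def myFuncTensor(x):
    # Haar ('db1') analysis step via strided slicing, so it works
    # on symbolic tensors; assumes the feature dimension is even
    even = x[:, 0::2]                  # x[2k]
    odd = x[:, 1::2]                   # x[2k+1]
    cA = (even + odd) / np.sqrt(2.0)   # approximation coefficients
    cD = (even - odd) / np.sqrt(2.0)   # detail coefficients
    return cA * cD                     # element-wise product, 64 dims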
Sorry, can I ask a question about tensors?
import tensorflow as tf
import numpy as np

W1 = tf.Variable(np.array([[1, 2], [3, 4]]))
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
array = W1.eval(sess)
print(array)
That works! However, with

from keras import backend as K
import numpy as np

kvar = K.variable(np.array([[1, 2], [3, 4]]))
K.eval(kvar)
print(kvar)

I got <CudaNdarrayType(float32, matrix)>, and with kvar.eval() I got b'CudaNdarray([[ 1. 2.]\n [ 3. 4.]])'. I use Keras, so how can I get a NumPy array like in the TensorFlow example?
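For what it's worth, K.eval already returns a NumPy array; the snippet above evaluates kvar but then prints the variable object itself. A minimal sketch:

import numpy as np
from keras import backend as K

kvar = K.variable(np.array([[1, 2], [3, 4]]))
array = K.eval(kvar)   # K.eval returns a plain NumPy array
print(array)           # [[ 1.  2.]  [ 3.  4.]]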
I would probably duplicate the dense layers. Instead of having 2 layers with 128 units each, have 4 layers with 64 units each. The total capacity is the same, but you will be able to form the cross products directly between 64-dimensional outputs.
from keras.models import Model
from keras.layers import Dense, Lambda, Concatenate

# create dense layers and store their output tensors; they take the
# outputs of Model_1 and Model_2 as input
d1 = Dense(64, ....)(Model_1.output)
d2 = Dense(64, ....)(Model_1.output)
d3 = Dense(64, ....)(Model_2.output)
d4 = Dense(64, ....)(Model_2.output)

cross1 = Lambda(myFunc, output_shape=....)([d1, d4])
cross2 = Lambda(myFunc, output_shape=....)([d2, d3])

# I don't really know what kind of "merge" you want, so I used
# Concatenate; there are also Add, Multiply and others...
output = Concatenate()([cross1, cross2])
# use the "axis" argument of Concatenate to control which axis
# gets doubled by the concatenation

model = Model([Model_1.input, Model_2.input], output)
Now, for the lambda function:

import keras.backend as K

def myFunc(x):
    # x is a list of two tensors; return their element-wise product
    return x[0] * x[1]
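To make the wiring concrete, here is a minimal end-to-end sketch; the two base models, the input shapes, and the activations are placeholder assumptions, not the asker's actual architecture:

from keras.models import Model
from keras.layers import Input, Dense, Lambda, Concatenate

def myFunc(x):
    return x[0] * x[1]

# two toy stand-ins for Model_1 and Model_2 (placeholder shapes)
in1 = Input(shape=(100,))
in2 = Input(shape=(100,))
Model_1 = Model(in1, Dense(128, activation='relu')(in1))
Model_2 = Model(in2, Dense(128, activation='relu')(in2))

# four 64-unit heads instead of two 128-unit ones
d1 = Dense(64, activation='relu')(Model_1.output)
d2 = Dense(64, activation='relu')(Model_1.output)
d3 = Dense(64, activation='relu')(Model_2.output)
d4 = Dense(64, activation='relu')(Model_2.output)

cross1 = Lambda(myFunc, output_shape=(64,))([d1, d4])
cross2 = Lambda(myFunc, output_shape=(64,))([d2, d3])

output = Concatenate()([cross1, cross2])   # shape: (batch, 128)
model = Model([Model_1.input, Model_2.input], output)
model.summary()

Calling model.predict([x1_batch, x2_batch]) then yields the concatenated (batch, 128) cross-product features.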