I am trying to understand the concatenation of layers in TensorFlow Keras. Below I have drawn what I think is the concatenation of two RNN layers [space left for picture clarity] and the output.
Here I am trying to concatenate two RNN layers. One layer takes longitudinal data [integer-valued] of patients over one time sequence, and the other takes details of the same patients over a different time sequence, with categorical input.
I don't want these two different time sequences to be mixed up, since this is medical data, so I am trying this approach. But first I want to be sure that what I have drawn is what concatenating two layers actually means. Below is my code. It appears to work well, but I want to confirm that what I drew and what is implemented agree.
import tensorflow as tf
from tensorflow.keras import Input, initializers, layers

# create a SimpleRNN over the first input sequence
first_input = Input(shape=(4, 7), dtype='float32')
simpleRNN1 = layers.SimpleRNN(units=25,
                              bias_initializer=initializers.RandomNormal(stddev=0.0001),
                              activation="relu",
                              kernel_initializer="random_uniform")(first_input)

# another RNN layer over the second input sequence
second_input = Input(shape=(16, 1), dtype='float32')
simpleRNN2 = layers.SimpleRNN(units=25,
                              bias_initializer=initializers.RandomNormal(stddev=0.0001),
                              activation="relu",
                              kernel_initializer="random_uniform")(second_input)

# concatenate the two layers, stack dense layers on top
concat_lay = tf.keras.layers.Concatenate()([simpleRNN1, simpleRNN2])
dens_lay = layers.Dense(64, activation='relu')(concat_lay)
dens_lay = layers.Dense(32, activation='relu')(dens_lay)
dens_lay = layers.Dense(1, activation='sigmoid')(dens_lay)

model = tf.keras.Model(inputs=[first_input, second_input], outputs=[dens_lay])
# note: 'lr' is not a valid compile() argument; the learning rate belongs to the optimizer
model.compile(loss='binary_crossentropy',
              optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              metrics=["accuracy"])
model.summary()
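To check that the drawing matches the implementation, a small probe model (an illustrative sketch, not part of the original code) can expose the concatenated tensor directly: each SimpleRNN branch emits a 25-dimensional vector per sample, and Concatenate simply places them side by side into a 50-dimensional vector.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import Input, layers

first_input = Input(shape=(4, 7))
second_input = Input(shape=(16, 1))
rnn1 = layers.SimpleRNN(units=25)(first_input)
rnn2 = layers.SimpleRNN(units=25)(second_input)
concat = layers.Concatenate()([rnn1, rnn2])

# a probe model that stops at the concatenation, so we can inspect its shape
probe = tf.keras.Model([first_input, second_input], concat)
out = probe.predict([np.zeros((3, 4, 7)), np.zeros((3, 16, 1))])
print(out.shape)  # (3, 50): 25 features from each branch, side by side
```

Nothing in the two time sequences is mixed before this point; each branch processes its own sequence independently, and only the final summary vectors are joined.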
There are three built-in RNN layers in Keras: keras.layers.SimpleRNN, keras.layers.GRU, and keras.layers.LSTM.
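For reference, the three built-in layers can be instantiated side by side (a minimal sketch with made-up batch/sequence shapes); all of them map a sequence to a single output vector by default:

```python
import tensorflow as tf
from tensorflow.keras import layers

x = tf.zeros((2, 10, 8))  # (batch, time steps, features)
for Layer in (layers.SimpleRNN, layers.GRU, layers.LSTM):
    out = Layer(units=16)(x)
    print(Layer.__name__, out.shape)  # each returns (2, 16)
```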
You can definitely have multiple hidden layers in an RNN. One of the most common approaches to choosing the number of hidden units is to start with a very small network (one hidden unit), apply K-fold cross-validation (a larger k gives a lower-variance estimate at a higher computational cost), and estimate the average prediction risk.
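The mechanics of that risk estimate can be sketched as follows, with a toy linear model standing in for the network just to keep the example fast (scikit-learn's KFold is an assumption; any splitter works): fit on k-1 folds, score on the held-out fold, then average the validation errors.

```python
import numpy as np
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=60)

risks = []
for train_idx, val_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    # fit on the training folds (least squares stands in for model training)
    w, *_ = np.linalg.lstsq(X[train_idx], y[train_idx], rcond=None)
    # score on the held-out fold
    val_err = np.mean((X[val_idx] @ w - y[val_idx]) ** 2)
    risks.append(val_err)

avg_risk = float(np.mean(risks))  # the estimated average prediction risk
```

For the RNN itself, the fit/score steps would be replaced by `model.fit` and `model.evaluate` on the corresponding index slices; the loop structure is the same.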
Multilayer RNNs generalize both feed-forward neural nets and one-hidden-layer RNNs. Deep learning has arguably achieved tremendous success in recent years; in simple terms, it uses the composition of many nonlinear functions to model the complex dependency between input features and labels.
Basically, an RNN layer is comprised of a single RNN cell that unrolls according to the "number of steps" value (the number of time steps/segments) you provide. As mentioned earlier, the main specialty of RNNs is the ability to model short-term dependencies, which comes from the hidden state that is carried from one time step to the next.
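That unrolling can be sketched in plain NumPy (dimensions mirror the first branch above; the weight values are arbitrary assumptions): the same cell weights are reused at every time step, and the hidden state `h` carries information forward.

```python
import numpy as np

# hypothetical dimensions: 4 time steps, 7 input features, 25 hidden units
steps, features, units = 4, 7, 25
rng = np.random.default_rng(0)
W_x = rng.normal(size=(features, units)) * 0.1  # input-to-hidden weights
W_h = rng.normal(size=(units, units)) * 0.1    # hidden-to-hidden weights
b = np.zeros(units)

x = rng.normal(size=(steps, features))  # one input sequence
h = np.zeros(units)                     # initial hidden state
for t in range(steps):                  # the "unrolling" over time steps
    h = np.maximum(0.0, x[t] @ W_x + h @ W_h + b)  # relu, as in the model above

print(h.shape)  # the layer returns only the final hidden state: (25,)
```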
Concatenation means 'chaining together' or 'unification' here: making a union of two entities.
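Concretely, for 2-D inputs Keras's Concatenate joins the tensors along the last axis (a minimal sketch):

```python
import numpy as np
import tensorflow as tf

a = tf.constant(np.ones((2, 25), dtype="float32"))
b = tf.constant(np.zeros((2, 25), dtype="float32"))
c = tf.keras.layers.Concatenate()([a, b])  # joins along the last axis
print(c.shape)  # (2, 50): a's 25 values first, then b's 25
```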
I think your problem is addressed in https://datascience.stackexchange.com/questions/29634/how-to-combine-categorical-and-continuous-input-features-for-neural-network-trai (How to combine categorical and continuous input features for neural network training).
If you have biomedical data, e.g. ECG, as the continuous data and diagnoses as the categorical data, I would consider ensemble learning the best ansatz.
What the best solution is here depends on the details of your problem ...
Building an ensemble of two neural nets is described in https://machinelearningmastery.com/ensemble-methods-for-deep-learning-neural-networks/
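The simplest such ensemble can be sketched as follows (the model structure and shapes are illustrative assumptions, and the members are untrained here, so the probabilities are meaningless): train each branch as a separate model, then average their predicted probabilities.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import Input, layers

def make_branch(shape):
    # one ensemble member: an RNN over one of the two sequences
    inp = Input(shape=shape)
    h = layers.SimpleRNN(units=25)(inp)
    out = layers.Dense(1, activation="sigmoid")(h)
    return tf.keras.Model(inp, out)

model_a = make_branch((4, 7))    # continuous/longitudinal branch
model_b = make_branch((16, 1))   # categorical branch
# (in practice, each model would be compiled and fit on its own data here)

x_a = np.zeros((3, 4, 7), dtype="float32")
x_b = np.zeros((3, 16, 1), dtype="float32")
# average the two members' probabilities (equal weights assumed)
p = (model_a.predict(x_a) + model_b.predict(x_b)) / 2.0
print(p.shape)  # (3, 1)
```

Unlike the concatenation model, the two sequences never interact at all here: each member makes its own prediction, and only the predictions are combined.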