I am trying to normalize a layer in my neural network using l2 normalization. I want to divide each node/element in a specific layer by its l2 norm (the square root of the sum of squared elements), and my assumption is that Keras' l2_normalize can achieve this: https://www.tensorflow.org/api_docs/python/tf/keras/backend/l2_normalize?version=stable. However, I am not sure how to actually use it, since there are no examples in the documentation. I found other examples that wrap it in a lambda function, for example Lambda(lambda x: K.l2_normalize(x, axis=1))(previous_layer), but I am not sure why that is necessary. I would appreciate help on how keras.backend.l2_normalize should be used and why a lambda function might be needed. Thanks!
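To make this concrete, here is a small NumPy sketch of the operation I have in mind (the array values are made up):

import numpy as np

# A batch of two 3-dimensional vectors (made-up values)
x = np.array([[3.0, 4.0, 0.0],
              [1.0, 2.0, 2.0]])

# Divide each row by its l2 norm (square root of the sum of squares)
norms = np.sqrt((x ** 2).sum(axis=1, keepdims=True))
print(x / norms)
# [[0.6        0.8        0.        ]
#  [0.33333333 0.66666667 0.66666667]]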
Here is how I'd want it to be used:
from keras.models import Sequential
from keras.layers import Dense

autoencoder = Sequential()
# Encoder Layer
autoencoder.add(Dense(encoded_dim, input_shape=(input_dim,),
                      activation='relu'))
# Normalization - Need help here!
# TODO: Add l2_normalize here
# Decoder Layer
# TODO: Add final output layer here
Do it as in the example you mentioned; that is correct. You need a Layer for every operation in the model, and backend operations are no exception; that's the reason for the Lambda layer (Keras needs layers to do its magic).
import keras.backend as K
from keras.layers import Lambda

autoencoder.add(Lambda(lambda x: K.l2_normalize(x, axis=1)))
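For context, a complete version of your skeleton might look like this (the input_dim and encoded_dim values and the sigmoid output activation are assumptions for illustration, not from your post):

import keras.backend as K
from keras.models import Sequential
from keras.layers import Dense, Lambda

input_dim = 784    # assumed, e.g. flattened 28x28 images
encoded_dim = 32   # assumed bottleneck size

autoencoder = Sequential()
# Encoder Layer
autoencoder.add(Dense(encoded_dim, input_shape=(input_dim,),
                      activation='relu'))
# Normalization: divide each encoded vector by its l2 norm
autoencoder.add(Lambda(lambda x: K.l2_normalize(x, axis=1)))
# Decoder Layer (sigmoid assumed for inputs scaled to [0, 1])
autoencoder.add(Dense(input_dim, activation='sigmoid'))

autoencoder.compile(optimizer='adam', loss='binary_crossentropy')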
If you are planning to get the encoder's output later for other things, I suggest you create the encoder and decoder as separate models:
encoder = Sequential()
#.... add encoder layers ....
#encoder.add(....)
decoder = Sequential()
#.... add decoder layers ....
#decoder.add(....)
Finally, create the autoencoder as another model, for training:
from keras.layers import Input
from keras.models import Model

autoInput = Input(shape=shape_of_the_encoder_input)  # e.g. (input_dim,)
encodedData = encoder(autoInput)
decodedData = decoder(encodedData)
autoencoder = Model(autoInput, decodedData)
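That way, after training the combined model, you can use the encoder on its own. A rough sketch (x_train, x_test, and the optimizer/loss choices are placeholders for illustration):

autoencoder.compile(optimizer='adam', loss='mse')
autoencoder.fit(x_train, x_train, epochs=50, batch_size=256)

# The encoder shares its trained weights with the autoencoder,
# so it can now produce the normalized codes directly:
codes = encoder.predict(x_test)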