I'm trying to implement Elastic-Net, whose loss is MSE plus L1 and L2 penalties on the weights. I want to use this loss function in Keras:
def nn_weather_model():
    ip_weather = Input(shape=(30, 38, 5))
    x_weather = BatchNormalization(name='weather1')(ip_weather)
    x_weather = Flatten()(x_weather)
    Dense100_1 = Dense(100, activation='relu', name='weather2')(x_weather)
    Dense100_2 = Dense(100, activation='relu', name='weather3')(Dense100_1)
    Dense18 = Dense(18, activation='linear', name='weather5')(Dense100_2)
    model_weather = Model(inputs=[ip_weather], outputs=[Dense18])
    model = model_weather
    ip = ip_weather
    op = Dense18
    return model, ip, op
My loss function is:

def cost_function():
    def loss(y_true, y_pred):
        return K.mean(K.square(y_pred - y_true)) + L1 + L2
    return loss
It's MSE + L1 + L2, where L1 and L2 are:
weight1 = model.layers[3].get_weights()[0]
weight2 = model.layers[4].get_weights()[0]
weight3 = model.layers[5].get_weights()[0]
L1 = Calculate_L1(weight1, weight2, weight3)
L2 = Calculate_L2(weight1, weight2, weight3)
I use the Calculate_L1 function to sum the weights of dense1, dense2 and dense3, and Calculate_L2 does the same for the L2 penalty.
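The Calculate_L1 and Calculate_L2 helpers aren't shown in the question; presumably they look something like this (a hypothetical NumPy sketch, where the lam coefficient is an assumed penalty strength):

```python
import numpy as np

def Calculate_L1(*weights, lam=0.01):
    # L1 penalty: lam * sum of absolute values over all weight matrices
    return lam * sum(np.sum(np.abs(w)) for w in weights)

def Calculate_L2(*weights, lam=0.01):
    # L2 penalty: lam * sum of squared entries over all weight matrices
    return lam * sum(np.sum(np.square(w)) for w in weights)
```

Note these return plain Python floats computed from NumPy snapshots of the weights, which is exactly why they never update during training: they are evaluated once, outside the TensorFlow graph.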
When I train with RB_model.compile(loss=cost_function(), optimizer='RMSprop'), the L1 and L2 variables don't update every batch. So I tried using a callback on batch_begin:
class update_L1L2weight(Callback):
    def __init__(self):
        super(update_L1L2weight, self).__init__()

    def on_batch_begin(self, batch, logs=None):
        weight1 = self.model.layers[3].get_weights()[0]
        weight2 = self.model.layers[4].get_weights()[0]
        weight3 = self.model.layers[5].get_weights()[0]
        L1 = Calculate_L1(weight1, weight2, weight3)
        L2 = Calculate_L2(weight1, weight2, weight3)
How can I use a callback to compute L1 and L2 in on_batch_begin and pass them into the loss function?
Creating custom loss functions in Keras: a custom loss function can be created by defining a function that takes the true values and predicted values as its required parameters. The function should return an array of losses, and it can then be passed to the model at compile time.
kernel_regularizer: regularizer that applies a penalty on the layer's kernel. bias_regularizer: regularizer that applies a penalty on the layer's bias. activity_regularizer: regularizer that applies a penalty on the layer's output.
The loss function should take only two arguments: the target value (y_true) and the predicted value (y_pred), because measuring the prediction error requires exactly these two values. These arguments are passed by the model itself when fitting the data.
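As a sketch of that signature, here is a minimal custom loss (just MSE re-implemented by hand, assuming a TF2/Keras backend):

```python
import tensorflow as tf

def custom_mse(y_true, y_pred):
    # Mean squared error over the last axis; returns one loss value per sample.
    return tf.reduce_mean(tf.square(y_pred - y_true), axis=-1)

# Passed at compile time:
# model.compile(optimizer='rmsprop', loss=custom_mse)
```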
A loss function is one of the two arguments required for compiling a Keras model:

from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential()
model.add(layers.Dense(64, kernel_initializer='uniform', input_shape=(10,)))
You can simply use the built-in weight regularization in Keras for each layer. To do that, use the kernel_regularizer parameter of the layer and specify a regularizer for it. For example:
from keras import regularizers
model.add(Dense(..., kernel_regularizer=regularizers.l2(0.1)))
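For elastic net specifically, keras.regularizers.l1_l2 combines both penalties in one regularizer. A sketch applying it to the questioner's model (the 0.01 coefficients are placeholder values, not tuned):

```python
from tensorflow import keras
from tensorflow.keras import layers, regularizers

# Each Dense layer gets both an L1 and an L2 kernel penalty (elastic net).
reg = regularizers.l1_l2(l1=0.01, l2=0.01)

inputs = keras.Input(shape=(30, 38, 5))
x = layers.BatchNormalization()(inputs)
x = layers.Flatten()(x)
x = layers.Dense(100, activation='relu', kernel_regularizer=reg)(x)
x = layers.Dense(100, activation='relu', kernel_regularizer=reg)(x)
outputs = layers.Dense(18, activation='linear', kernel_regularizer=reg)(x)

model = keras.Model(inputs, outputs)
# Plain MSE is enough here: the L1/L2 penalties are recomputed from the
# current weights on every training step and added to the loss automatically.
model.compile(loss='mse', optimizer='rmsprop')
```

Because the penalties are part of the graph, there is no need for a callback to refresh them each batch.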
Those regularizers create loss tensors that are added to the overall loss, as implemented in the Keras source code:
# Add regularization penalties
# and other layer-specific losses.
for loss_tensor in self.losses:
    total_loss += loss_tensor
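You can inspect these penalty tensors yourself via model.losses. A small sketch with hand-chosen constants: a bias-free layer whose 3x2 kernel is all ones, under an l2(0.5) penalty, should contribute 0.5 * 6 = 3.0:

```python
from tensorflow import keras
from tensorflow.keras import layers, regularizers

# One Dense layer, no bias, kernel fixed to all ones for a predictable penalty.
model = keras.Sequential([
    layers.Dense(2, use_bias=False,
                 kernel_initializer=keras.initializers.Constant(1.0),
                 kernel_regularizer=regularizers.l2(0.5),
                 input_shape=(3,)),
])

# Kernel is 3x2 ones, so the L2 penalty is 0.5 * sum(w^2) = 0.5 * 6 = 3.0
penalty = float(sum(model.losses))
print(penalty)  # 3.0
```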