
Custom loss function in Keras

I'm working on an image class-incremental classification approach, using a CNN as a feature extractor and a fully-connected block for classification.

First, I fine-tuned a pre-trained VGG network for a new task. Once the net is trained for the new task, I store some exemplars for every class in order to avoid forgetting when new classes become available.

When new classes become available, I have to compute every output of the exemplars, including the exemplars of the new classes. By appending zeros to the outputs for old classes, and appending the one-hot label of each new class to the new-class outputs, I get my new labels. For example, if 3 new classes enter:

Old class type output: [0.1, 0.05, 0.79, ..., 0 0 0]

New class type output: [0.1, 0.09, 0.3, 0.4, ..., 1 0 0] (the last outputs correspond to the new classes)

My question is: how can I change the loss function to a custom one in order to train for the new classes? The loss function I want to implement is defined as:

[image: loss function formula]

where the distillation loss corresponds to the outputs for the old classes (to avoid forgetting), and the classification loss corresponds to the new classes.
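To make the shape of what I'm after concrete, here is a rough sketch of the kind of function I have in mind. Note that `n_old`, `alpha`, and the use of cross-entropy for both terms are my assumptions for illustration, not the exact formula from the image:

```python
import tensorflow as tf

n_old = 3    # number of old classes (assumption for illustration)
alpha = 0.5  # weighting between the two terms (assumption)

def incremental_loss(y_true, y_pred):
    # Distillation term: keep the old-class outputs close to the stored ones
    distill = tf.keras.losses.binary_crossentropy(
        y_true[:, :n_old], y_pred[:, :n_old])
    # Classification term: standard cross-entropy on the new-class outputs
    classif = tf.keras.losses.categorical_crossentropy(
        y_true[:, n_old:], y_pred[:, n_old:])
    return alpha * distill + (1.0 - alpha) * classif
```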

It would be nice if you could provide a sample of code showing how to change the loss function in Keras.

Thanks!!!!!

asked May 06 '17 by Eric



2 Answers

All you have to do is define a function for that, using Keras backend functions for the calculations. The function must take the true values and the model's predicted values as arguments.

Now, since I'm not sure what g, q, x and y are in your function, I'll just create a basic example here without caring about what it means or whether it's an actually useful function:

```python
import keras.backend as K

def customLoss(yTrue, yPred):
    return K.sum(K.log(yTrue) - K.log(yPred))
```

All backend functions can be seen here.

After that, compile your model using that function instead of a regular one:

```python
model.compile(loss=customLoss, optimizer = .....)
```
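To show where the custom function plugs in end to end, here is a minimal runnable sketch. The model, data, and the mean-absolute-error body are made up for illustration; it uses `tf.keras` and TensorFlow ops, which play the same role as the backend functions above:

```python
import numpy as np
import tensorflow as tf

def custom_mae(y_true, y_pred):
    # Per-sample mean absolute error, written with TF ops
    return tf.reduce_mean(tf.abs(y_true - y_pred), axis=-1)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(loss=custom_mae, optimizer='adam')

# Toy data: y is just the sum of the features
x = np.random.rand(64, 4).astype('float32')
y = x.sum(axis=1, keepdims=True)
history = model.fit(x, y, epochs=1, verbose=0)
```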
answered Sep 24 '22 by Daniel Möller


Since Keras is not multi-backend anymore (source), operations for custom losses should be written directly in TensorFlow, rather than using the backend module.

You can make a custom loss with Tensorflow by making a function that takes y_true and y_pred as arguments, as suggested in the documentation:

```python
import tensorflow as tf

x = tf.random.uniform(minval=0, maxval=1, shape=(10, 1), dtype=tf.float32)
y = tf.random.uniform(minval=0, maxval=1, shape=(10, 1), dtype=tf.float32)

def custom_mse(y_true, y_pred):
    squared_difference = tf.square(y_true - y_pred)
    return tf.reduce_mean(squared_difference, axis=-1)

custom_mse(x, y)
```

```
<tf.Tensor: shape=(10,), dtype=float32, numpy=
array([0.30084264, 0.03535452, 0.10345092, 0.28552982, 0.02426687,
       0.04410492, 0.01701574, 0.55496216, 0.74927425, 0.05747304],
      dtype=float32)>
```

Then you can set your custom loss in model.compile(). Here's a complete example:

```python
x = tf.random.uniform(minval=0, maxval=1, shape=(1000, 4), dtype=tf.float32)
y = tf.multiply(tf.reduce_sum(x, axis=-1), 5)  # y is a function of x

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, input_shape=[4], activation='relu'),
    tf.keras.layers.Dense(32, activation='relu'),
    tf.keras.layers.Dense(1)
])

model.compile(loss=custom_mse, optimizer='adam')

history = model.fit(x, y, epochs=5)
```

```
Train on 1000 samples
Epoch 1/5
1000/1000 [==============================] - 0s 371us/sample - loss: 105.6800
Epoch 2/5
1000/1000 [==============================] - 0s 35us/sample - loss: 98.8208
Epoch 3/5
1000/1000 [==============================] - 0s 34us/sample - loss: 82.7988
Epoch 4/5
1000/1000 [==============================] - 0s 33us/sample - loss: 52.4585
Epoch 5/5
1000/1000 [==============================] - 0s 34us/sample - loss: 17.8190
```
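As an alternative to a bare function, the same loss can be packaged as a subclass of `tf.keras.losses.Loss`; Keras then handles the reduction across the batch for you, and the class can carry configuration (the model below is a throwaway sketch just to show the `compile` step):

```python
import tensorflow as tf

class CustomMSE(tf.keras.losses.Loss):
    def call(self, y_true, y_pred):
        # Per-sample mean squared error; Keras applies the batch reduction
        return tf.reduce_mean(tf.square(y_true - y_pred), axis=-1)

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=[4])])
model.compile(loss=CustomMSE(), optimizer='adam')
```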
answered Sep 26 '22 by Nicolas Gervais