I am new to Keras and TensorFlow. How do I go about implementing a custom loss function for object detection? Right now I have 5 parameters: 4 for the bounding box coordinates and 1 for whether the object is present or not. The loss function should return the square of the difference between the coordinates if the object is present; if the object is absent it should return a huge value as the loss. This is the code I am trying right now:
import numpy as np
import tensorflow as tf
from tensorflow.keras import backend as K

def loss_func(y_true, y_pred):
    mask = np.array([False, False, False, False, True])  # select the object-presence column
    mask1 = np.array([True, True, True, True, False])    # select the bounding-box coordinate columns
    # axis=1 so the boolean masks select columns, not rows
    check_class = K.mean(K.square(tf.subtract(tf.boolean_mask(y_true, mask, axis=1), tf.boolean_mask(y_pred, mask, axis=1))))
    mean_square = K.mean(K.square(tf.subtract(tf.boolean_mask(y_true, mask1, axis=1), tf.boolean_mask(y_pred, mask1, axis=1))))
    value = K.mean(tf.boolean_mask(y_pred, mask, axis=1))
    return value * mean_square + check_class
Here I am masking out the other values to obtain the last value, which is 1000 if the object is present and 0 if the object is absent. Is there a better way to do this?
When I run this on Kaggle, the loss decreases rapidly; by the 2nd epoch the loss becomes 0.
We can create a custom loss function in Keras by writing a function that takes two arguments, the true values and the predicted values, and returns a scalar. We then pass the custom loss function to model.compile as a parameter, just as we would with any other loss function.
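As a minimal sketch of that compile step (the loss body and model architecture here are placeholders, not taken from the question):

```python
import tensorflow as tf

# A toy custom loss: mean squared error written by hand.
# It takes y_true and y_pred and returns a scalar per batch.
def my_loss(y_true, y_pred):
    return tf.reduce_mean(tf.square(y_true - y_pred))

# Hypothetical tiny model, just to show the compile stage.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(5, input_shape=(10,))
])

# The custom function is passed like any built-in loss.
model.compile(optimizer="adam", loss=my_loss)
```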
The loss functions used in object detection fall into two categories: classification loss and localization loss. The former trains the classification head, which determines the type of the target object; the latter trains a separate head that regresses a rectangular box to locate the target object.
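A hedged sketch of that two-part split, assuming the layout from the question (first four columns are box coordinates, last column is an objectness score; this matches the question, not any particular library):

```python
import tensorflow as tf

def detection_loss(y_true, y_pred):
    # Localization loss: squared error on the box coordinates.
    loc = tf.reduce_mean(tf.square(y_true[:, :4] - y_pred[:, :4]), axis=-1)
    # Classification loss: binary cross-entropy on the objectness score.
    cls = tf.keras.losses.binary_crossentropy(y_true[:, 4:5], y_pred[:, 4:5])
    # Only penalize box regression where an object is actually present.
    return cls + y_true[:, 4] * loc
```

Masking the localization term by the true objectness score avoids punishing box coordinates on images that contain no object at all.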
A custom loss function can be created by defining a function that takes the true values and predicted values as required parameters. The function should return an array of per-sample losses, and it can then be passed at the compile stage.
First of all, I would recommend using 1 rather than 1000 for the "object present" parameter.
You can manipulate y_true and y_pred.
import tensorflow as tf

penalty = 100

def lf(y_true, y_pred):
    mean_square = tf.keras.losses.mean_squared_error(y_true[:, 0:4], y_pred[:, 0:4])
    check_class = tf.subtract(y_true[:, 4], y_pred[:, 4])
    check_class = check_class * -penalty
    check_class = tf.keras.backend.mean(check_class)
    return mean_square + check_class
The function above first computes the mean squared error over the first four parameters.
The second part then checks the "present" parameter: if the true and predicted values match, the subtraction yields 0; if they differ, it yields a nonzero value.
That difference is then scaled by the penalty, punishing the model for a wrong "present" prediction.
Using "punish" by some constant can be difficult to train. I would recommend changing the optimizer to SGD, adam will not work well in the situation, and playing with the penalty until you reach satisfying results.