
How to debug custom loss function in Keras?

I created a custom loss function with a parameter:

from keras import backend as K

def w_categorical_crossentropy(weights):
    def loss(y_true, y_pred):
        print(weights)
        print("----")
        print(weights.shape)
        final_mask = K.zeros_like(y_pred[:, 0])
        y_pred_max = K.max(y_pred, axis=1)
        y_pred_max = K.reshape(y_pred_max, (K.shape(y_pred)[0], 1))
        y_pred_max_mat = K.cast(K.equal(y_pred, y_pred_max), K.floatx())
        return K.categorical_crossentropy(y_true, y_pred)
    return loss

Now I need to inspect the value of the weights parameter, but the print calls don't work properly. Is there any way to print the value of weights?
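
As background for why the prints misbehave: the Python print statements inside loss run only once, when Keras traces the function to build the graph, not on every batch. To see values at training time you have to print through the backend instead. A minimal sketch of that idea, assuming a TensorFlow backend (K.print_tensor is part of the Keras backend API; tf.print is TensorFlow's):

import tensorflow as tf
from keras import backend as K

def w_categorical_crossentropy(weights):
    # weights is a plain numpy array, so a Python print here
    # (at construction time) shows its value once:
    print(weights, weights.shape)

    def loss(y_true, y_pred):
        # y_true/y_pred are symbolic tensors; tf.print emits
        # their values every time the loss is evaluated:
        tf.print("y_pred:", y_pred)
        # K.print_tensor prints a tensor as a side effect and
        # returns it unchanged, so it can be chained inline:
        y_pred = K.print_tensor(y_pred, message='y_pred = ')
        return K.categorical_crossentropy(y_true, y_pred)

    return loss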

Asked Mar 09 '18 by Bedrick Kiq


People also ask

How do I debug a Keras loss?

The best way to debug is to create some simple fake data for which you know the result (or can easily calculate it by hand), and compare the output of the function with the true value.
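
For instance, a minimal sketch of that check, assuming a TensorFlow backend (the numbers are hand-picked so the expected loss is easy to verify on paper):

from keras import backend as K

# One-hot targets and predictions with a known answer:
y_true = K.constant([[0., 1.], [1., 0.]])
y_pred = K.constant([[0.1, 0.9], [0.8, 0.2]])

# By hand: mean(-log(0.9), -log(0.8)) ≈ 0.1643
loss = K.eval(K.mean(K.categorical_crossentropy(y_true, y_pred)))
print(loss)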

How do I pass a custom loss function in Keras?

A custom loss function can be created by defining a function that takes the true values and the predicted values as required parameters. The function should return an array of losses (one value per sample). The function can then be passed at the compile stage.
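
A minimal sketch of that pattern (the model and the loss name here are made-up illustrations, not from the question):

from tensorflow import keras
from tensorflow.keras import backend as K

def my_loss(y_true, y_pred):
    # Returns one loss value per sample, as described above.
    return K.mean(K.square(y_pred - y_true), axis=-1)

model = keras.Sequential([keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer='adam', loss=my_loss)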

What is the loss function in model compile?

The purpose of loss functions is to compute the quantity that a model should seek to minimize during training.


1 Answer

What I sometimes do (not the best solution for sure, nor always possible) is just replace the K backend calls with np and test the function with some simple data. Here is an example.

Original Keras function:

def loss(y_true, y_pred):
    # old_prediction, advantage and self.LOSS_CLIPPING are
    # captured from the enclosing scope in the original code.
    means = K.reshape(y_pred[:, 0], (-1, 1))
    stds = K.reshape(y_pred[:, 1], (-1, 1))
    var = K.square(stds)
    denom = K.sqrt(2 * np.pi * var)
    prob_num = - K.square(y_true - means) / (2 * var)
    prob = prob_num - denom
    r = K.exp(prob - old_prediction)
    return -K.mean(K.minimum(r * advantage,
                             K.clip(r, min_value=1 - self.LOSS_CLIPPING,
                                    max_value=1 + self.LOSS_CLIPPING) * advantage))

Testing function (same math, with K replaced by np):

import numpy as np

def loss(y_true, y_pred):
    means = np.reshape(y_pred[:, 0], (-1, 1))
    stds = np.reshape(y_pred[:, 1], (-1, 1))
    var = np.square(stds)
    print(var.shape)
    denom = np.sqrt(2 * np.pi * var)
    print(denom.shape)
    prob_num = - np.square(y_true - means) / (2 * var)
    prob = prob_num - denom
    r = np.exp(prob - old_prediction)
    print(r.shape)
    clipped = np.minimum(r * advantage,
                         np.clip(r, a_min=1 - LOSS_CLIPPING,
                                 a_max=1 + LOSS_CLIPPING) * advantage)
    print(clipped.shape)
    return -np.mean(clipped)

Testing it:

LOSS_CLIPPING = 0.2
y_pred = np.array([[2,1], [1, 1], [5, 1]])
y_true = np.array([[1], [3], [2]])
old_prediction = np.array([[-2], [-5], [-6]])
advantage = np.array([[ 0.51467506],[-0.64960159],[-0.53304715]])
loss(y_true, y_pred)

After running the above, the results are:

(3, 1)
(3, 1)
(3, 1)
(3, 1)
0.43409555193679816
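
As a follow-up check, you can verify that the np version really matches the backend version by evaluating the original K-based loss on the same data (a sketch under the assumption of a TensorFlow backend, with the closure variables promoted to float32 globals):

import numpy as np
from tensorflow.keras import backend as K

LOSS_CLIPPING = 0.2
old_prediction = np.array([[-2.], [-5.], [-6.]], dtype=np.float32)
advantage = np.array([[0.51467506], [-0.64960159], [-0.53304715]],
                     dtype=np.float32)

def k_loss(y_true, y_pred):
    means = K.reshape(y_pred[:, 0], (-1, 1))
    stds = K.reshape(y_pred[:, 1], (-1, 1))
    var = K.square(stds)
    denom = K.sqrt(2 * np.pi * var)
    prob_num = -K.square(y_true - means) / (2 * var)
    prob = prob_num - denom
    r = K.exp(prob - old_prediction)
    return -K.mean(K.minimum(r * advantage,
                             K.clip(r, 1 - LOSS_CLIPPING,
                                    1 + LOSS_CLIPPING) * advantage))

y_true = K.constant([[1.], [3.], [2.]])
y_pred = K.constant([[2., 1.], [1., 1.], [5., 1.]])
print(K.eval(k_loss(y_true, y_pred)))  # ≈ 0.43409556, matching np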
Answered Oct 27 '22 by Julian