There are many objective functions built into Keras. But how can you create your own? I tried to write a very basic objective function, but it gives an error, and I found no way to know the size of the parameters passed to the function at run time.
def loss(y_true, y_pred):
    loss = T.vector('float64')
    for i in range(1):
        flag = True
        for j in range(y_true.ndim):
            if(y_true[i][j] == y_pred[i][j]):
                flag = False
        if(flag):
            loss = loss + 1.0
    loss /= y_true.shape[0]
    print loss.type
    print y_true.shape[0]
    return loss
I am getting two seemingly contradictory errors. With the code above:
model.compile(loss=loss, optimizer=ada)
  File "/usr/local/lib/python2.7/dist-packages/Keras-0.0.1-py2.7.egg/keras/models.py", line 75, in compile
    updates = self.optimizer.get_updates(self.params, self.regularizers, self.constraints, train_loss)
  File "/usr/local/lib/python2.7/dist-packages/Keras-0.0.1-py2.7.egg/keras/optimizers.py", line 113, in get_updates
    grads = self.get_gradients(cost, params, regularizers)
  File "/usr/local/lib/python2.7/dist-packages/Keras-0.0.1-py2.7.egg/keras/optimizers.py", line 23, in get_gradients
    grads = T.grad(cost, params)
  File "/usr/local/lib/python2.7/dist-packages/theano/gradient.py", line 432, in grad
    raise TypeError("cost must be a scalar.")
TypeError: cost must be a scalar.
It says the cost (loss) returned by the function must be a scalar, but if I change line 2 of the function from
loss = T.vector('float64')
to
loss = T.scalar('float64')
it shows this error instead:
model.compile(loss=loss, optimizer=ada)
  File "/usr/local/lib/python2.7/dist-packages/Keras-0.0.1-py2.7.egg/keras/models.py", line 75, in compile
    updates = self.optimizer.get_updates(self.params, self.regularizers, self.constraints, train_loss)
  File "/usr/local/lib/python2.7/dist-packages/Keras-0.0.1-py2.7.egg/keras/optimizers.py", line 113, in get_updates
    grads = self.get_gradients(cost, params, regularizers)
  File "/usr/local/lib/python2.7/dist-packages/Keras-0.0.1-py2.7.egg/keras/optimizers.py", line 23, in get_gradients
    grads = T.grad(cost, params)
  File "/usr/local/lib/python2.7/dist-packages/theano/gradient.py", line 529, in grad
    handle_disconnected(elem)
  File "/usr/local/lib/python2.7/dist-packages/theano/gradient.py", line 516, in handle_disconnected
    raise DisconnectedInputError(message)
theano.gradient.DisconnectedInputError: grad method was asked to compute the gradient with respect to a variable that is not part of the computational graph of the cost, or is used only by a non-differentiable operator: <TensorType(float64, matrix)>
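Both errors come from the same mistake: loss is created as a brand-new symbolic variable (T.vector, then T.scalar) instead of being computed from y_true and y_pred. A fresh T.vector is not a scalar, and a fresh T.scalar is not connected to the model's computational graph, so T.grad fails either way. The loss must be a scalar expression built out of its arguments; note also that an exact-equality comparison like the one above has zero gradient everywhere, so it could not train a model even if it compiled. A minimal sketch of the correct pattern, assuming the Theano backend shown in the traceback (the squared-error body is just a placeholder):
import theano.tensor as T

def loss(y_true, y_pred):
    # Build the loss as an expression of the arguments so that it
    # stays inside the computational graph, and reduce it to a scalar.
    return T.mean(T.sqr(y_true - y_pred))
This also answers the size question: y_true and y_pred are symbolic here, so their sizes are not available as plain Python numbers when the function runs; y_true.shape[0] is itself a symbolic expression.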
A custom loss function can be created by defining a function that takes the true values and the predicted values as its two arguments and returns the per-sample losses; the function is then passed at the compile stage. The loss is computed from the difference between predicted and actual values: the further the predictions are from the actual values, the larger the number the loss function produces.
Loss: a scalar value that we attempt to minimize during training of the model. The lower the loss, the closer our predictions are to the true labels. This is usually Mean Squared Error (MSE), as David Maust said above, or often, in Keras, Categorical Cross Entropy.
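To make that concrete, here is a hedged sketch of a categorical cross-entropy written with the Keras backend (the function name is mine, and it assumes the rows of y_pred are probability distributions, e.g. softmax outputs):
from keras import backend as K

def my_categorical_crossentropy(y_true, y_pred):
    # Clip predictions away from 0 and 1 so log() stays finite.
    y_pred = K.clip(y_pred, K.epsilon(), 1.0 - K.epsilon())
    # Return one loss value per sample, which Keras then averages.
    return -K.sum(y_true * K.log(y_pred), axis=-1)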
Here is my small snippet to write new loss functions and test them before using:
import numpy as np
from keras import backend as K

_EPSILON = K.epsilon()

def _loss_tensor(y_true, y_pred):
    y_pred = K.clip(y_pred, _EPSILON, 1.0 - _EPSILON)
    out = -(y_true * K.log(y_pred) + (1.0 - y_true) * K.log(1.0 - y_pred))
    return K.mean(out, axis=-1)

def _loss_np(y_true, y_pred):
    y_pred = np.clip(y_pred, _EPSILON, 1.0 - _EPSILON)
    out = -(y_true * np.log(y_pred) + (1.0 - y_true) * np.log(1.0 - y_pred))
    return np.mean(out, axis=-1)

def check_loss(_shape):
    if _shape == '2d':
        shape = (6, 7)
    elif _shape == '3d':
        shape = (5, 6, 7)
    elif _shape == '4d':
        shape = (8, 5, 6, 7)
    elif _shape == '5d':
        shape = (9, 8, 5, 6, 7)
    y_a = np.random.random(shape)
    y_b = np.random.random(shape)
    out1 = K.eval(_loss_tensor(K.variable(y_a), K.variable(y_b)))
    out2 = _loss_np(y_a, y_b)
    assert out1.shape == out2.shape
    assert out1.shape == shape[:-1]
    print np.linalg.norm(out1)
    print np.linalg.norm(out2)
    print np.linalg.norm(out1 - out2)

def test_loss():
    shape_list = ['2d', '3d', '4d', '5d']
    for _shape in shape_list:
        check_loss(_shape)
        print '======================'

if __name__ == '__main__':
    test_loss()
As you can see, I am testing the binary_crossentropy loss here, with two separate implementations: a NumPy version (_loss_np) and a tensor version (_loss_tensor). (Note: if you stick to the Keras backend functions, the loss will work with both Theano and TensorFlow, but if you depend on one of them you can also reference them by K.theano.tensor.function or K.tf.function.)
I then compare the output shapes and the L2 norms of the two outputs (which should be almost equal) and the L2 norm of their difference (which should be close to 0).
Once you are satisfied that your loss function is working properly, you can use it as:
model.compile(loss=_loss_tensor, optimizer=sgd)
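One related caveat, assuming a Keras version new enough to have load_model: a model saved after compiling with a custom loss must be reloaded with that loss passed through custom_objects, otherwise deserialization fails. A sketch (the file name is hypothetical):
from keras.models import load_model

# 'my_model.h5' is a placeholder path; the key must match the
# function name used at compile time.
model = load_model('my_model.h5', custom_objects={'_loss_tensor': _loss_tensor})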
A simple way to do it is by calling the Keras backend:
import keras.backend as K

def custom_loss(y_true, y_pred):
    return K.mean((y_true - y_pred)**2)
Then:
model.compile(loss=custom_loss, optimizer=sgd, metrics=['accuracy'])
which is equivalent to:
model.compile(loss='mean_squared_error', optimizer=sgd, metrics=['accuracy'])
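If you want to convince yourself that the two really match, here is a quick toy check (the input values are my own example):
import numpy as np
from keras import backend as K

y_t = K.variable(np.array([[1.0, 0.0]]))
y_p = K.variable(np.array([[0.9, 0.2]]))
# mean of (0.1**2, 0.2**2) -> 0.025, the same value MSE would report
print(K.eval(custom_loss(y_t, y_p)))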