I want to create a neural network with my own loss function. For this purpose, I created this loss function:
import tensorflow as tf
from tensorflow.keras import layers

class my_loss(tf.keras.losses.Loss):
    def __init__(self, e1, e2, **kwargs):
        assert e1 > e2, "e1 must be greater than e2"
        self.e1 = e1
        self.e2 = e2
        super().__init__(**kwargs)

    def call(self, Y_true, Y_pred):
        d = tf.reduce_mean(tf.abs(Y_true - Y_pred))
        l1 = d ** 1.5  # Where the error is large, show the loss much more
        l2 = d * 1.5   # Where the error is moderate, show the loss slightly more
        l3 = d
        res = tf.experimental.numpy.select([d >= self.e1, self.e2 < d < self.e1, d <= self.e2], [l1, l2, l3])
        return res

    def get_config(self):
        parent_config = super().get_config()
        return {**parent_config, "e1": self.e1, "e2": self.e2}
model = tf.keras.models.Sequential()
model.add(layers.Dense(50, input_dim=9))  # Length of features is 9
model.add(layers.Dense(50))
model.add(layers.Dense(50))
model.add(layers.Dense(1))
model.compile(
    loss=my_loss(2, 0.5),
    optimizer="adam",
    # metrics=["accuracy"]
)
hist = model.fit(x_train, y_train, epochs=50)
But I get this error when fitting the model:
Output: OperatorNotAllowedInGraphError: using a `tf.Tensor` as a Python `bool` is not allowed: AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature.
You're getting that error when you call tf.experimental.numpy.select, right?

It happens because, as the error says, a tf.Tensor cannot be used as a Python bool inside a graph-compiled function. The culprit is the chained comparison self.e2 < d < self.e1: Python evaluates it as (self.e2 < d) and (d < self.e1), and the and operator has to convert the intermediate tensor to a Python bool, which is exactly what is not allowed. You have to express these conditions with the proper tf functions instead.

In particular, tf.math.logical_and returns the truth value of x AND y element-wise, and tf.math.greater_equal returns the truth value of (x >= y) element-wise.
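As a quick standalone illustration (the values below are made up just for the demo), both ops return boolean tensors, which is what select expects:

import tensorflow as tf

d = tf.constant(1.2)
e1, e2 = 2.0, 0.5

print(tf.greater_equal(d, e1))              # -> False
print(tf.math.logical_and(e2 < d, d < e1))  # -> True, the "middle band" condition
# A chained comparison like `e2 < d < e1` would instead try to convert the
# intermediate tensor to a Python bool, which fails in graph mode (where Keras
# runs your loss).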
So, in order to fix the error, replace that line with this:
res = tf.experimental.numpy.select([
    tf.greater_equal(d, self.e1),
    tf.math.logical_and(self.e2 < d, d < self.e1),
    tf.greater_equal(self.e2, d)
], [l1, l2, l3])
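With that change the loss builds into the graph without complaint. As a quick sanity check (hypothetical values, run eagerly, assuming my_loss is defined exactly as in your question with the line above swapped in):

loss_fn = my_loss(2, 0.5)
y_true = tf.constant([[3.0]])
y_pred = tf.constant([[0.5]])
print(loss_fn(y_true, y_pred))  # d = 2.5 >= e1, so the d**1.5 branch is taken (~3.95)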