I'm trying to pass two loss functions to a model, as Keras allows this. From the documentation:
loss: String (name of objective function) or objective function or Loss instance. See losses. If the model has multiple outputs, you can use a different loss on each output by passing a dictionary or a list of losses. The loss value that will be minimized by the model will then be the sum of all individual losses.
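For context, here is a minimal sketch of how I understand that multi-output API (a hypothetical two-output model; 'mse' and 'mae' are just placeholder losses):

import tensorflow as tf

inp = tf.keras.Input(shape=(16,))
out_a = tf.keras.layers.Dense(1, name='a')(inp)
out_b = tf.keras.layers.Dense(1, name='b')(inp)
model = tf.keras.Model(inp, [out_a, out_b])

# One loss per output; the value minimized is the sum of both.
model.compile(optimizer='adam', loss=['mse', 'mae'])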
The two loss functions:
def l_2nd(beta):
    def loss_2nd(y_true, y_pred):
        ...
        return K.mean(t)
    return loss_2nd
and
def l_1st(alpha):
    def loss_1st(y_true, y_pred):
        ...
        return alpha * 2 * tf.linalg.trace(tf.matmul(tf.matmul(Y, L, transpose_a=True), Y)) / batch_size
    return loss_1st
Then I build the model:
l2 = K.eval(l_2nd(self.beta))
l1 = K.eval(l_1st(self.alpha))
self.model.compile(opt, [l2, l1])
When I train, it produces the error:
1.15.0-rc3
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/resource_variable_ops.py:1630: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating: If using Keras pass *_constraint arguments to layers.
---------------------------------------------------------------------------
NotImplementedError                       Traceback (most recent call last)
<ipython-input-20-298384dd95ab> in <module>()
     47         create_using=nx.DiGraph(), nodetype=None, data=[('weight', int)])
     48 
---> 49 model = SDNE(G, hidden_size=[256, 128],)
     50 model.train(batch_size=100, epochs=40, verbose=2)
     51 embeddings = model.get_embeddings()

10 frames
<ipython-input-19-df29e9865105> in __init__(self, graph, hidden_size, alpha, beta, nu1, nu2)
     72         self.A, self.L = self._create_A_L(
     73             self.graph, self.node2idx)  # Adj Matrix, L Matrix
---> 74         self.reset_model()
     75         self.inputs = [self.A, self.L]
     76         self._embeddings = {}

<ipython-input-19-df29e9865105> in reset_model(self, opt)
---> 84         self.model.compile(opt, loss=[l2, l1])
     85         self.get_embeddings()
     86 

/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/training/tracking/base.py in _method_wrapper(self, *args, **kwargs)
    455     self._self_setattr_tracking = False  # pylint: disable=protected-access
    456     try:
--> 457       result = method(self, *args, **kwargs)
    458     finally:
    459       self._self_setattr_tracking = previous_value  # pylint: disable=protected-access

NotImplementedError: Cannot convert a symbolic Tensor (2nd_target:0) to a numpy array.
Please help, thanks!
For me, the issue occurred when upgrading from numpy 1.19 to 1.20 while using ray's RLlib, which uses tensorflow 2.2 internally.
Simply downgrading with pip install numpy==1.19.5 solved the problem; the error did not occur anymore.
Update (comment by @codeananda): You can also update to a newer TensorFlow version (2.6+), which resolves the problem (pip install -U tensorflow).
I found the solution to this problem:
It was because I mixed symbolic tensors with a non-symbolic type, such as a NumPy array. For example, it is NOT recommended to have something like this:
def my_mse_loss_b(b):
    def mseb(y_true, y_pred):
        ...
        a = np.ones_like(y_true)  # a NumPy array here is not recommended
        return K.mean(K.square(y_pred - y_true)) + a
    return mseb
Instead, you should convert everything to symbolic tensors, like this:
def my_mse_loss_b(b):
    def mseb(y_true, y_pred):
        ...
        a = K.ones_like(y_true)  # use the Keras backend instead, so everything stays symbolic
        return K.mean(K.square(y_pred - y_true)) + a
    return mseb
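For example, a quick sketch of plugging such a loss into compile (a hypothetical single-output model; b=0.5 is an arbitrary value):

import numpy as np
import tensorflow as tf
from tensorflow.keras import backend as K

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
# Pass the closure itself; everything inside it stays symbolic at compile time.
model.compile(optimizer='adam', loss=my_mse_loss_b(0.5))
model.fit(np.random.rand(8, 4), np.random.rand(8, 1), epochs=1, verbose=0)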
Hope this helps!