When I save my model I get the following error:
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-40-853303da8647> in <module>()
      7 
      8 
----> 9 model.save(outdir+'model.h5')
     10 
     11 
5 frames
/usr/local/lib/python3.6/dist-packages/h5py/_hl/group.py in __setitem__(self, name, obj)
    371 
    372             if isinstance(obj, HLObject):
--> 373                 h5o.link(obj.id, self.id, name, lcpl=lcpl, lapl=self._lapl)
    374 
    375             elif isinstance(obj, SoftLink):
h5py/_objects.pyx in h5py._objects.with_phil.wrapper()
h5py/_objects.pyx in h5py._objects.with_phil.wrapper()
h5py/h5o.pyx in h5py.h5o.link()
RuntimeError: Unable to create link (name already exists)
This does not happen when I use built-in layers to build my model, or other user-defined layers. The error arises only when I use this particular user-defined layer:
class MergeTwo(keras.layers.Layer):
    def __init__(self, nout, **kwargs):
        super(MergeTwo, self).__init__(**kwargs)
        self.nout = nout
        self.alpha = self.add_weight(shape=(self.nout,), initializer='zeros',
                                     trainable=True)
        self.beta = self.add_weight(shape=(self.nout,), initializer='zeros',
                                    trainable=True)

    def call(self, inputs):
        A, B = inputs
        result = keras.layers.add([self.alpha * A, self.beta * B])
        result = keras.activations.tanh(result)
        return result

    def get_config(self):
        config = super(MergeTwo, self).get_config()
        config['nout'] = self.nout
        return config
I read the docs but nothing worked; I cannot figure out why. I am using Google Colab and TensorFlow version 2.2.0.
I think the problem is that both of your weight variables internally have the same name, which should not happen. You can give them distinct names with the name parameter of add_weight:
self.alpha = self.add_weight(shape=(self.nout,), initializer='zeros',
                             trainable=True, name="alpha")
self.beta = self.add_weight(shape=(self.nout,), initializer='zeros',
                            trainable=True, name="beta")
This should work around the problem.
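For completeness, here is a minimal sketch of the corrected layer with a save/load round trip, assuming tf.keras (TensorFlow 2.x); the input shapes and file name are illustrative, not from the question:

import tensorflow as tf
from tensorflow import keras

class MergeTwo(keras.layers.Layer):
    def __init__(self, nout, **kwargs):
        super(MergeTwo, self).__init__(**kwargs)
        self.nout = nout
        # Unique names keep h5py from trying to create two links
        # with the same key when the model is saved to HDF5.
        self.alpha = self.add_weight(shape=(self.nout,), initializer='zeros',
                                     trainable=True, name='alpha')
        self.beta = self.add_weight(shape=(self.nout,), initializer='zeros',
                                    trainable=True, name='beta')

    def call(self, inputs):
        A, B = inputs
        result = keras.layers.add([self.alpha * A, self.beta * B])
        return keras.activations.tanh(result)

    def get_config(self):
        config = super(MergeTwo, self).get_config()
        config['nout'] = self.nout
        return config

# Quick check that saving and reloading now works.
inp_a = keras.Input(shape=(4,))
inp_b = keras.Input(shape=(4,))
out = MergeTwo(4)([inp_a, inp_b])
model = keras.Model([inp_a, inp_b], out)
model.save('model.h5')
restored = keras.models.load_model('model.h5',
                                   custom_objects={'MergeTwo': MergeTwo})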
I found another solution, although from a different scenario. I was using Keras Tuner to do some hyperparameter tuning, and when building several models (e.g. 4), the layers would end up with the same names across models. As I was testing network depth as a parameter, I would have multiple lstm, lstm_1, and dense layers in my group. An example of a single model is shown below for context.
Layer (type)                 Output Shape              Param #
=================================================================
lstm (LSTM)                  (None, 12, 320)           536320
_________________________________________________________________
lstm_1 (LSTM)                (None, 12, 64)            98560
_________________________________________________________________
dense (Dense)                (None, 12, 1)             65
I found that changing the name of the layer to something unique, using the name parameter, meant that I wouldn't get this error. An example, for illustration purposes:
import random

unique_id = random.randint(1, 99999999)  # collisions are quite unlikely
model.add(LSTM(64, name="my_layer_name_{}".format(unique_id)))
By appending the unique_id to the custom name, I ensured that each layer in each model of my Keras Tuner group had a unique name.
In your case you have a single model; however, as the layers are custom, I'm not quite sure how I would name them (given the current format they have), though a sketch of one possibility follows below. Can you check if this is the case?
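For illustration, a hedged sketch of what naming the custom layer instances could look like; since MergeTwo forwards **kwargs to keras.layers.Layer, it already accepts a name argument (the shapes and names below are made up):

from tensorflow import keras
# MergeTwo as defined in the question is assumed to be in scope.
inp_a = keras.Input(shape=(16,))
inp_b = keras.Input(shape=(16,))
# An explicit, unique name for each instance of the custom layer.
merged = MergeTwo(16, name="merge_two_0")([inp_a, inp_b])
model = keras.Model([inp_a, inp_b], merged)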