I am using the Keras functional API to create a neural net that takes a word embedding layer as input for a sentence classification task. But my code breaks right at the point where the input and embedding layers are connected. Following the tutorial at https://medium.com/tensorflow/predicting-the-price-of-wine-with-the-keras-functional-api-and-tensorflow-a95d1c2c1b03, I have code like below:
from keras.layers import Input, Embedding, Dense
from keras.models import Model

max_seq_length = 100  # i.e., a sentence has at most 100 words
word_weight_matrix = ...  # shape (9825, 300): the vocabulary has 9825 words and each is a 300-dimension vector
deep_inputs = Input(shape=(max_seq_length,))
embedding = Embedding(9825, 300, input_length=max_seq_length,
                      weights=word_weight_matrix, trainable=False)(deep_inputs)  # line A
hidden = Dense(targets, activation="softmax")(embedding)  # targets = the number of output classes
model = Model(inputs=deep_inputs, outputs=hidden)
Line A then raises the following error:
ValueError: You called `set_weights(weights)` on layer "embedding_1" with a weight list of length 9825, but the layer was expecting 1 weights. Provided weights: [[-0.04057981 0.05743935 0.0109863 ..., 0.0072...
And I don't really understand what the error means...
It seems that the Input layer isn't defined properly... Previously, when I used a Sequential model with the embedding layer defined in exactly the same way, everything worked fine. But when I switched to the functional API, I got this error.
Any help is much appreciated, thanks in advance.
input_length: This is the length of the input sequences, as you would define for any input layer of a Keras model. For example, if all of your input documents consist of 1000 words, this would be 1000. Shorter documents are typically padded up to this length, as in the sketch below.
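A minimal sketch of that padding step, assuming sentences have already been converted to integer word indices (the toy sequences below are illustrative, not from the question):

from keras.preprocessing.sequence import pad_sequences

max_seq_length = 100
encoded_sentences = [[12, 7, 256], [4, 98, 33, 6001, 2]]  # toy word-index sequences
padded = pad_sequences(encoded_sentences, maxlen=max_seq_length, padding='post')
print(padded.shape)  # (2, 100): every row now matches Input(shape=(max_seq_length,))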
The Embedding layer is one of the built-in layers in Keras. It is mainly used in natural language processing applications such as language modeling, but it can also be used in other tasks that involve neural networks over discrete tokens. When dealing with NLP problems, we can initialize it with pre-trained word embeddings such as GloVe.
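As a hedged sketch of where a matrix like word_weight_matrix might come from, assuming GloVe vectors (the file name glove.6B.300d.txt and the toy word_index mapping are illustrative, not from the question):

import numpy as np

embedding_dim = 300
word_index = {'wine': 1, 'price': 2}  # toy word-to-index map; the real one would have 9825 entries

# Row 0 is left as zeros for the padding index, hence the +1 below.
word_weight_matrix = np.zeros((len(word_index) + 1, embedding_dim))
with open('glove.6B.300d.txt', encoding='utf8') as f:  # illustrative file name
    for line in f:
        parts = line.split()
        word, vector = parts[0], np.asarray(parts[1:], dtype='float32')
        if word in word_index:
            word_weight_matrix[word_index[word]] = vector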
In Keras, you generally specify the shape of the input data supplied to the model during training, because the model cannot infer the shape of the training data on its own. The shapes of all other tensors (layers) are then computed automatically.
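A small sketch showing this shape inference at work (the layer sizes are arbitrary):

from keras.layers import Input, Dense
from keras.models import Model

x = Input(shape=(100,))  # only the input shape is declared explicitly
h = Dense(32)(x)  # output shape (None, 32) is inferred
y = Dense(5, activation='softmax')(h)
Model(inputs=x, outputs=y).summary()  # prints the inferred shape of every layer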
An Embedding layer is faster than the equivalent Dense layer because it makes a simplifying assumption: its input is an integer index, so it can simply look up the corresponding row of its weight matrix. A Dense layer would instead treat a one-hot encoding of that index as ordinary input activations and perform a full matrix multiplication to produce the same result.
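A quick NumPy sketch of that equivalence (toy sizes, illustrative only):

import numpy as np

weights = np.random.rand(5, 3).astype('float32')  # 5-word vocabulary, 3-dimension vectors
index = 2
one_hot = np.zeros(5, dtype='float32')
one_hot[index] = 1.0

lookup = weights[index]  # what an Embedding layer does: a row lookup
matmul = one_hot.dot(weights)  # what a Dense layer computes on the one-hot input
assert np.allclose(lookup, matmul)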
Try the updated code below: you have to use len(vocabulary) + 1 as the first argument (input_dim) of the Embedding layer, and you have to wrap the weight matrix in a list, weights=[word_weight_matrix] (that is what the error about "expecting 1 weights" is complaining about). Note that input_dim must match the number of rows in the weight matrix, so with 9826 the matrix needs an extra all-zero row at index 0, reserved for padding.
from keras.layers import Input, Embedding, Dense
from keras.models import Model

max_seq_length = 100  # i.e., a sentence has at most 100 words
word_weight_matrix = ...  # shape (9826, 300): a zero row at index 0 for padding, then the 9825 word vectors
deep_inputs = Input(shape=(max_seq_length,))
embedding = Embedding(9826, 300, input_length=max_seq_length,
                      weights=[word_weight_matrix], trainable=False)(deep_inputs)  # line A, now fixed
hidden = Dense(targets, activation="softmax")(embedding)  # targets = the number of output classes
model = Model(inputs=deep_inputs, outputs=hidden)
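One more caveat: Dense applied directly to the 3-D embedding output produces one softmax per word position, shape (batch, max_seq_length, targets). For a single label per sentence you would typically pool or flatten first. A hedged sketch of that variant (targets = 5 and the random weight matrix are stand-ins for your real values):

import numpy as np
from keras.layers import Input, Embedding, GlobalAveragePooling1D, Dense
from keras.models import Model

max_seq_length = 100
targets = 5  # illustrative number of classes
word_weight_matrix = np.random.rand(9826, 300).astype('float32')  # stand-in for real GloVe rows

deep_inputs = Input(shape=(max_seq_length,))
embedding = Embedding(9826, 300, input_length=max_seq_length,
                      weights=[word_weight_matrix], trainable=False)(deep_inputs)
pooled = GlobalAveragePooling1D()(embedding)  # (batch, 300): one vector per sentence
output = Dense(targets, activation='softmax')(pooled)
model = Model(inputs=deep_inputs, outputs=output)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])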