Keras - How to construct a shared Embedding() Layer for each Input-Neuron

I want to create a deep neural network in Keras where each element of the input layer is "encoded" using the same shared Embedding() layer before it is fed into the deeper layers.

Each input would be a number that defines the type of an object, and the network should learn an embedding that encapsulates some internal representation of "what this object is".

So, if the input layer has X dimensions, and the embedding has Y dimensions, the first hidden layer should consist of X*Y neurons (each input neuron embedded).

Here is a little image showing the network architecture I would like to create, where each input element is encoded using a 3-dimensional embedding:

How can I do this?

Another Coder asked Feb 08 '17

People also ask

How do I add an embedding layer in Keras?

We can create a simple Keras model by just adding an embedding layer. For example, set the vocabulary size (input_dim) to 10 to encode the numbers 0 to 9, set output_dim to 4 so each word vector has length 4, and set the input sequence length to 2.
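A minimal sketch of that example (the model itself is arbitrary; only the layer sizes match the description above):

import numpy as np
from keras.models import Sequential
from keras.layers import Embedding

# Vocabulary of 10 (indices 0-9), vectors of length 4, sequences of length 2.
model = Sequential()
model.add(Embedding(input_dim=10, output_dim=4, input_length=2))
model.compile('rmsprop', 'mse')

# A batch with one sequence [1, 2] comes out as a (1, 2, 4) tensor.
print(model.predict(np.array([[1, 2]])).shape)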

How does the embedding layer in Keras work?

Keras provides an embedding layer that converts each word into a fixed-length vector of defined size. The one-hot-encoding technique generates a large sparse matrix to represent a single word, whereas, in embedding layers, every word has a real-valued vector of fixed length.
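One way to see this is that the layer is just a trainable lookup table; a small sketch (the sizes here are arbitrary):

import numpy as np
from keras.models import Sequential
from keras.layers import Embedding

# The embedding weights form a (vocab_size, output_dim) lookup table;
# row i is the vector for word index i.
model = Sequential()
model.add(Embedding(input_dim=5, output_dim=3, input_length=1))
model.compile('rmsprop', 'mse')

table = model.layers[0].get_weights()[0]        # shape (5, 3)
vector = model.predict(np.array([[2]]))[0, 0]   # output for index 2
print(np.allclose(vector, table[2]))            # True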

What is the difference between embedding and dense layer?

In other words, a Dense layer performs a dot product, which is more computationally expensive than the select operation performed by the Embedding layer. This makes training much faster. However, the Embedding layer lacks the bias term and the activation function that a Dense layer has.
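As a plain-numpy illustration of that difference: a Dense-style dot product with a one-hot vector selects the same row that an Embedding lookup reads out directly, just at a higher cost.

import numpy as np

vocab_size, embedding_size = 5, 3
W = np.random.rand(vocab_size, embedding_size)  # shared weight matrix

index = 2
one_hot = np.eye(vocab_size)[index]

dense_out = one_hot.dot(W)  # full dot product over the sparse vector
embed_out = W[index]        # plain row selection

print(np.allclose(dense_out, embed_out))  # True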


1 Answer

from keras.layers import Input, Embedding

first_input = Input(shape=your_shape_tuple)
second_input = Input(shape=your_shape_tuple)
...

# Embedding needs both the vocabulary size and the output dimension.
embedding_layer = Embedding(input_dim=vocab_size, output_dim=embedding_size)

first_input_encoded = embedding_layer(first_input)
second_input_encoded = embedding_layer(second_input)
...

Rest of the model....

The embedding_layer will have shared weights. You can do this in the form of lists of layers if you have a lot of inputs (see the sketch below).
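For example, with hypothetical sizes (num_inputs, vocab_size, and embedding_size are placeholders):

from keras.layers import Input, Embedding

num_inputs, vocab_size, embedding_size = 8, 100, 3  # hypothetical sizes

# One Input per element; reusing the single Embedding instance
# means every input is encoded with the same weights.
inputs = [Input(shape=(1,), dtype='int32') for _ in range(num_inputs)]
embedding_layer = Embedding(input_dim=vocab_size, output_dim=embedding_size)
encoded = [embedding_layer(x) for x in inputs]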

If what you want is to transform a tensor of inputs, the way to do it is:

from keras.layers import Input, Embedding

# If your inputs are all fed in one numpy array:
input_layer = Input(shape=(num_input_indices,), dtype='int32')

# The output of this layer will be a 3D tensor of shape
# (batch_size, num_input_indices, embedding_size).
embedded_input = Embedding(input_dim=vocab_size, output_dim=embedding_size)(input_layer)
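To get the X*Y first hidden layer described in the question, the embedded tensor can then be flattened; a minimal end-to-end sketch with hypothetical sizes:

from keras.models import Model
from keras.layers import Input, Embedding, Flatten, Dense

num_input_indices, vocab_size, embedding_size = 5, 100, 3  # hypothetical

input_layer = Input(shape=(num_input_indices,), dtype='int32')
embedded = Embedding(input_dim=vocab_size, output_dim=embedding_size)(input_layer)

# Flatten (batch, 5, 3) to (batch, 15) so the first hidden layer
# sees X * Y values, as in the question.
hidden = Dense(32, activation='relu')(Flatten()(embedded))
model = Model(input_layer, Dense(1)(hidden))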

Is this what you were looking for?

Nassim Ben answered Oct 16 '22