Keras one hot embedding before LSTM

Suppose I have a training dataset consisting of several sequences, each padded to length 40, with a dictionary of 80 entries, e.g., example = [0, 0, 0, 3, 4, 9, 22, ...], and I want to feed that into an LSTM layer. What I want to do is apply a one-hot encoder to the sequences, e.g., example_after_one_hot.shape = (40, 80). Is there a Keras layer that can do this? I have tried Embedding, but it does not seem to be a one-hot encoding.

Edit: another way would be to use an Embedding layer. Given that the dictionary only contains 80 different keys, how should I set the output size of the Embedding layer? Roughly, the two set-ups I am weighing look like this (a sketch assuming TensorFlow's bundled Keras; the 64 LSTM units and output_dim=32 are just placeholders, not values from my data):
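
import tensorflow as tf

# Option A: one-hot vectors fed straight into the LSTM.
# Each sample has shape (40, 80): 40 timesteps, each an 80-dim one-hot vector.
one_hot_model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(40, 80)),
])

# Option B: integer sequences fed through an Embedding layer.
# input_dim=80 matches the dictionary size; output_dim is a free choice
# (the size of the learned vectors), not something fixed by the vocabulary.
embedding_model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=80, output_dim=32, input_length=40),
    tf.keras.layers.LSTM(64),
])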

1 Answer

I think you're looking for a pre-processing task, not something that is strictly part of your network.

Keras has a one-hot text pre-processing function that may be able to help you. Take a look at Keras text preprocessing. If that doesn't fit your needs, it's fairly easy to do the pre-processing yourself with numpy. You can do something like...

import numpy

# sentences: integer-coded, padded sequences (length 40, values < 80)
X = numpy.zeros(shape=(len(sentences), 40, 80), dtype='float32')
for i, sent in enumerate(sentences):
    for j, word in enumerate(sent):
        X[i, j, word] = 1.0  # set the one-hot position for this word

This will give you a one-hot encoding for a 2D array of "sentences", where each word in the array is an integer less than 80. Of course, the data doesn't have to be sentences; it can be any type of data.
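
If you'd rather not write the loop yourself, Keras also ships a small utility that performs the same conversion. A sketch, assuming your padded sequences are already stored as an integer numpy array:

import numpy as np
from tensorflow.keras.utils import to_categorical

# Integer-coded, padded sequences of shape (num_sentences, 40), values in [0, 80).
# Random data here, purely for illustration.
sequences = np.random.randint(0, 80, size=(3, 40))

# to_categorical expands the last axis into one-hot vectors,
# producing shape (num_sentences, 40, 80).
X = to_categorical(sequences, num_classes=80)
print(X.shape)  # (3, 40, 80)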

Note that Embedding layers are for learning a distributed representation of the data, not for putting it into a one-hot format.
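
To make that contrast concrete, here is a quick sketch of what an Embedding layer actually outputs for data like this (output_dim=32 is an arbitrary illustrative choice):

import numpy as np
import tensorflow as tf

seqs = np.random.randint(0, 80, size=(3, 40))  # integer-coded sentences
emb = tf.keras.layers.Embedding(input_dim=80, output_dim=32)
out = emb(seqs)

# Each integer is mapped to a learned 32-dim dense vector,
# not to a fixed 80-dim one-hot vector.
print(out.shape)  # (3, 40, 32)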

