The issue is related to this error: InvalidArgumentError (see above for traceback): indices[1] = 10 is not in [0, 10). I need a solution for R, so the one given in the link above does not apply directly.
library(keras)

maxlen <- 40

# character vocabulary (43 symbols)
chars <- c("'", "-", " ", "!", "\"", "(", ")", ",", ".", ":", ";", "?", "[", "]", "_", "=", "0", "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m", "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z")

# character-level tokenizer; keep all characters (no filtering)
tokenizer <- text_tokenizer(char_level = TRUE, filters = NULL)
tokenizer %>% fit_text_tokenizer(chars)
unlist(tokenizer$word_index)
Output is:
' - ! " ( ) , . : ; ? [ ] _ = 0 a b c d e f g h i j k l m n o p q r s t u v w x y z
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43
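(For illustration, a check that is not part of the original code: sequences produced by this tokenizer use exactly the indices above, so the values run from 1 to 43 and no character maps to 0.)

texts_to_sequences(tokenizer, "az")
# list(c(18, 43)): 'a' maps to 18, 'z' maps to 43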
How can I change the indexing in text_tokenizer so that it starts from 0 instead of 1?
The error I get after running fit() is as follows:
InvalidArgumentError: indices[127,7] = 43 is not in [0, 43)
[[Node: embedding_3/embedding_lookup = GatherV2[Taxis=DT_INT32, Tindices=DT_INT32, Tparams=DT_FLOAT, _class=["loc:@training_1/RMSprop/Assign_1"], _device="/job:localhost/replica:0/task:0/device:CPU:0"](embedding_3/embeddings/read, embedding_3/Cast, training_1/RMSprop/gradients/embedding_3/embedding_lookup_grad/concat/axis)]]
But I believe that changing the indexing will solve my problem.
Index 0 is often reserved for padding, so it is not a good idea to start your actual character indices from 0. Instead, adjust the Embedding
layer and add 1 to its input size, as suggested by the documentation:
input_dim: int > 0. Size of the vocabulary, i.e. maximum integer index + 1.
In your case this would be 43 + 1 = 44.
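A minimal sketch of the corresponding layer in R (the output_dim and the rest of the model are illustrative assumptions, not taken from your code):

model <- keras_model_sequential() %>%
  layer_embedding(
    input_dim = length(chars) + 1,  # 43 character indices + 1 for the reserved index 0
    output_dim = 32,                # illustrative embedding size
    input_length = maxlen
  )

With input_dim = 44 the valid index range becomes [0, 44), so index 43 no longer triggers the InvalidArgumentError.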