I'm trying to learn state embeddings for a sequence of states produced by an HMM, similar to how the TensorFlow Vector Representation of Words tutorial does this for text sequences.
My issue is that the "vocabulary" of this HMM contains only 12 distinct states. TensorFlow doesn't seem to like it when I run my code with batches larger than this vocabulary size. For example, attempting to train with a batch size of 14 gives the error:
F tensorflow/core/kernels/range_sampler.cc:86] Check failed: batch_size + avoided_values.size() <= range_ (14 vs. 12)
Abort trap: 6
What is the motivation behind this check?
If you are following the example from the tutorial, this error occurs when you set num_sampled greater than the vocabulary size. The tutorial uses:

num_sampled = 64 # Number of negative examples to sample.

Negative examples are drawn without replacement, so you cannot sample more distinct indices (for the negative examples in word2vec) than the vocabulary contains. The check batch_size + avoided_values.size() <= range_ enforces this: the number of negatives requested plus the true labels being avoided must not exceed the vocabulary size. With 12 states, set num_sampled low enough that this sum stays at or below 12.
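Here is a minimal pure-Python sketch of why the check exists, using random.sample as a stand-in for TensorFlow's candidate sampler (the function name sample_negatives is made up for illustration, not a TF API):

```python
import random

def sample_negatives(vocab_size, num_sampled, avoid=()):
    """Draw num_sampled distinct negative indices from [0, vocab_size),
    skipping the indices in `avoid` (the true labels)."""
    candidates = [i for i in range(vocab_size) if i not in set(avoid)]
    # Mirrors the TF check: num_sampled + len(avoid) must fit in the range.
    if num_sampled > len(candidates):
        raise ValueError(
            "num_sampled + avoided must be <= vocab_size "
            "(%d vs. %d)" % (num_sampled + len(avoid), vocab_size))
    return random.sample(candidates, num_sampled)

# With a 12-state vocabulary, sampling 11 negatives while avoiding
# 1 true label is fine; asking for 14 distinct negatives is impossible,
# which is exactly what the range_sampler check rejects.
print(sample_negatives(12, 11, avoid=(3,)))
```

Sampling without replacement from 12 states can never yield 14 distinct indices, so the sampler aborts rather than loop forever or return duplicates.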