I'm learning TensorFlow's wide_n_deep_tutorial these days, and I'm a bit confused by tf.contrib.layers.embedding_column. I wonder how TensorFlow implements the embedding column.
For example, suppose I have a sparse input with dimension 1000 and I want to embed it into a dense feature with dimension 10. Does it hold a fully connected layer with 1000*10 parameters and train them with backpropagation? Or does it use some other technique, like FM, to map the 1000-dim vector to a 10-dim vector?
There are 3 combiners in the embedding_column function:

- "sum": no normalization
- "mean": L1 normalization (divide by the total weight)
- "sqrtn": L2 normalization (divide by the square root of the sum of squared weights)

See tf.nn.embedding_lookup_sparse for more details.
It does not use FM to transform the dimensions. The embedding is an ordinary trainable weight matrix (in your example, 1000×10), updated by backpropagation along with the rest of the model; the combiner only controls how the rows of the active ids are aggregated.
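As a rough illustration of what the lookup computes (a minimal NumPy sketch, not TensorFlow's actual implementation; the sizes, ids, and weights below are made up), the combiner reduces the embedding rows of the active ids into one dense vector:

```python
import numpy as np

# Hypothetical sizes: a 1000-dim sparse input embedded into 10 dims.
vocab_size, embed_dim = 1000, 10

# The embedding column amounts to a trainable weight matrix of shape
# (vocab_size, embed_dim); its rows are updated by backprop like any
# other dense-layer weights.
embeddings = np.random.RandomState(0).randn(vocab_size, embed_dim)

# A sparse example with three active ids, each with weight 1.0.
ids = np.array([3, 17, 42])
weights = np.ones(len(ids))

# Gather the rows for the active ids, scaled by their weights.
rows = embeddings[ids] * weights[:, None]

# The three combiners of embedding_lookup_sparse:
combined_sum = rows.sum(axis=0)                                   # "sum"
combined_mean = rows.sum(axis=0) / weights.sum()                  # "mean" (L1)
combined_sqrtn = rows.sum(axis=0) / np.sqrt((weights**2).sum())   # "sqrtn" (L2)

print(combined_sum.shape)  # each combiner yields one (10,) dense vector
```

With unit weights, "mean" divides by the number of active ids and "sqrtn" by its square root; the output dimension is always embed_dim regardless of how many ids are active.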