
How to use tensorflow seq2seq without embeddings?

I have been working on LSTM for time-series forecasting using TensorFlow. Now I want to try sequence-to-sequence (seq2seq). The official site has a tutorial that shows NMT with embeddings. So, how can I use this new seq2seq module without embeddings (directly on time-series sequences)?

# 1. Encoder
encoder_cell = tf.contrib.rnn.BasicLSTMCell(LSTM_SIZE)
encoder_outputs, encoder_state = tf.nn.static_rnn(
  encoder_cell,
  x,
  dtype=tf.float32)

# Decoder
decoder_cell = tf.nn.rnn_cell.BasicLSTMCell(LSTM_SIZE)


helper = tf.contrib.seq2seq.TrainingHelper(
    decoder_emb_inp, decoder_lengths, time_major=True)


decoder = tf.contrib.seq2seq.BasicDecoder(
  decoder_cell, helper, encoder_state)

# Dynamic decoding
outputs, _ = tf.contrib.seq2seq.dynamic_decode(decoder)
outputs = outputs[-1]

# output is result of linear activation of last layer of RNN
weight = tf.Variable(tf.random_normal([LSTM_SIZE, N_OUTPUTS]))
bias = tf.Variable(tf.random_normal([N_OUTPUTS]))
predictions = tf.matmul(outputs, weight) + bias

What should be the args for TrainingHelper() if I use input_seq=x and output_seq=label?

decoder_emb_inp ??? decoder_lengths ???

Here input_seq is the first 8 points of the sequence and output_seq is the last 2 points (see the sketch below). Thanks in advance!
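
For concreteness, this is the kind of split I mean (shapes and batch size are illustrative, assuming a univariate series with a batch dimension):

import numpy as np

BATCH_SIZE = 32
N_INPUTS, N_OUTPUTS = 8, 2

# Each example is a series of 10 points with a single feature per step.
series = np.random.randn(BATCH_SIZE, N_INPUTS + N_OUTPUTS, 1).astype(np.float32)

x = series[:, :N_INPUTS, :]      # encoder input: first 8 points, shape [BATCH_SIZE, 8, 1]
label = series[:, N_INPUTS:, :]  # decoder target: last 2 points, shape [BATCH_SIZE, 2, 1]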

asked Mar 06 '18 by dnovai



1 Answer

I got it to work without embeddings using a very rudimentary InferenceHelper:

# Note: dtypes here is tensorflow.python.framework.dtypes; tf.float32 works just as well.
inference_helper = tf.contrib.seq2seq.InferenceHelper(
        sample_fn=lambda outputs: outputs,
        sample_shape=[dim],
        sample_dtype=dtypes.float32,
        start_inputs=start_tokens,
        end_fn=lambda sample_ids: False)

My inputs are floats with shape [batch_size, time, dim]. In the example below, dim is 1, but this can easily be extended to more dimensions. Here's the relevant part of the code:

projection_layer = tf.layers.Dense(
    units=1,  # = dim
    kernel_initializer=tf.truncated_normal_initializer(
        mean=0.0, stddev=0.1))

# Training Decoder
training_decoder_output = None
with tf.variable_scope("decode"):
    # output_data doesn't exist during prediction phase.
    if output_data is not None:
        # Prepend the "go" token
        go_tokens = tf.constant(go_token, shape=[batch_size, 1, 1])
        dec_input = tf.concat([go_tokens, target_data], axis=1)

        # Helper for the training process.
        training_helper = tf.contrib.seq2seq.TrainingHelper(
            inputs=dec_input,
            sequence_length=[output_size] * batch_size)

        # Basic decoder
        training_decoder = tf.contrib.seq2seq.BasicDecoder(
            dec_cell, training_helper, enc_state, projection_layer)

        # Perform dynamic decoding using the decoder
        training_decoder_output = tf.contrib.seq2seq.dynamic_decode(
            training_decoder, impute_finished=True,
            maximum_iterations=output_size)[0]

# Inference Decoder
# Reuses the same parameters trained by the training process.
with tf.variable_scope("decode", reuse=tf.AUTO_REUSE):
    start_tokens = tf.constant(
        go_token, shape=[batch_size, 1])

    # The sample_ids are the actual output in this case (not dealing with any logits here).
    # My end_fn is always False because I'm working with a generator that will stop giving 
    # more data. You may extend the end_fn as you wish. E.g. you can append end_tokens 
    # and make end_fn be true when the sample_id is the end token.
    inference_helper = tf.contrib.seq2seq.InferenceHelper(
        sample_fn=lambda outputs: outputs,
        sample_shape=[1],  # again because dim=1
        sample_dtype=dtypes.float32,
        start_inputs=start_tokens,
        end_fn=lambda sample_ids: False)
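
    # If you do append an end token, end_fn could instead look something like this
    # (end_token is illustrative and not defined in this snippet):
    # end_fn=lambda sample_ids: tf.reduce_all(tf.equal(sample_ids, end_token), axis=-1)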

    # Basic decoder
    inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,
                                                        inference_helper,
                                                        enc_state,
                                                        projection_layer)

    # Perform dynamic decoding using the decoder
    inference_decoder_output = tf.contrib.seq2seq.dynamic_decode(
        inference_decoder, impute_finished=True,
        maximum_iterations=output_size)[0]
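
To actually train on this, one option is to compare the decoder's projected outputs against the targets. The mean-squared-error loss and the Adam optimizer here are my own assumptions, not part of the original code:

# The decoder outputs carry the projected (dim=1) values for every step.
predictions = inference_decoder_output.rnn_output  # [batch_size, output_size, 1]

if target_data is not None:
    loss = tf.losses.mean_squared_error(
        labels=target_data,
        predictions=training_decoder_output.rnn_output)
    train_op = tf.train.AdamOptimizer(learning_rate=1e-3).minimize(loss)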

Have a look at this question. I also found this tutorial very useful for understanding seq2seq models, although it uses embeddings; just replace their GreedyEmbeddingHelper with an InferenceHelper like the one I posted above.
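
Note that the snippets above assume an encoder has already produced enc_state and that dec_cell is defined; that part is not shown here. A minimal sketch, with illustrative names and sizes, could be:

lstm_size = 32  # illustrative
input_data = tf.placeholder(tf.float32, [None, None, 1])  # [batch_size, n_input_steps, dim]

# Encoder: only its final state is needed to initialize the decoder.
enc_cell = tf.nn.rnn_cell.BasicLSTMCell(lstm_size)
_, enc_state = tf.nn.dynamic_rnn(enc_cell, input_data, dtype=tf.float32)

dec_cell = tf.nn.rnn_cell.BasicLSTMCell(lstm_size)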

P.S. I posted the full code at https://github.com/Andreea-G/tensorflow_examples

answered Oct 09 '22 by Andreea-G