TensorFlow ValueError: Variable already exists, disallowed

I am predicting financial time series with different time periods using TensorFlow. In order to divide the input data, I created sub-samples and used a for loop. However, I got a ValueError like this:

ValueError: Variable rnn/basic_lstm_cell/weights already exists, disallowed. Did you mean to set reuse=True in VarScope? Originally defined at:

Without the sub-sampling, this code works well. Below is my code.

    import tensorflow as tf
    import numpy as np
    import matplotlib
    import os
    import matplotlib.pyplot as plt

    class lstm:
        def __init__(self, x, y):
            # train Parameters
            self.seq_length = 50
            self.data_dim = x.shape[1]
            self.hidden_dim = self.data_dim*2
            self.output_dim = 1
            self.learning_rate = 0.0001
            self.iterations = 5 # originally 500

        def model(self,x,y):
            # build a dataset
            dataX = []
            dataY = []
            for i in range(0, len(y) - self.seq_length):
                _x = x[i:i + self.seq_length]
                _y = y[i + self.seq_length]
                dataX.append(_x)
                dataY.append(_y)

            train_size = int(len(dataY) * 0.7977)
            test_size = len(dataY) - train_size
            trainX, testX = np.array(dataX[0:train_size]), np.array(dataX[train_size:len(dataX)])
            trainY, testY = np.array(dataY[0:train_size]), np.array(dataY[train_size:len(dataY)])
            print(train_size,test_size)

            # input placeholders
            X = tf.placeholder(tf.float32, [None, self.seq_length, self.data_dim])
            Y = tf.placeholder(tf.float32, [None, 1])

            # build a LSTM network
            cell = tf.contrib.rnn.BasicLSTMCell(num_units=self.hidden_dim, state_is_tuple=True, activation=tf.tanh)
            outputs, _states = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32)
            self.Y_pred = tf.contrib.layers.fully_connected(outputs[:, -1], self.output_dim, activation_fn=None) 
            # We use the last cell's output

            # cost/loss
            loss = tf.reduce_sum(tf.square(self.Y_pred - Y))  # sum of the squares
            # optimizer
            optimizer = tf.train.AdamOptimizer(self.learning_rate)
            train = optimizer.minimize(loss)

            # RMSE
            targets = tf.placeholder(tf.float32, [None, 1])
            predictions = tf.placeholder(tf.float32, [None, 1])
            rmse = tf.sqrt(tf.reduce_mean(tf.square(targets - predictions)))

            # training
            with tf.Session() as sess:
                init = tf.global_variables_initializer()
                sess.run(init)

                # Training step
                for i in range(self.iterations):
                    _, step_loss = sess.run([train, loss], feed_dict={X: trainX, Y: trainY})

                # prediction
                train_predict = sess.run(self.Y_pred, feed_dict={X: trainX})
                test_predict = sess.run(self.Y_pred, feed_dict={X: testX})

            return train_predict, test_predict 

    # variables definition
    tsx = []
    tsy = []
    tsr = []
    trp = []
    tep = []

    x = np.loadtxt('data.csv', delimiter=',') # data for analysis
    y = x[:,[-1]]
    z = np.loadtxt('rb.csv', delimiter=',')   # data for time series
    z1 = z[:,0] # start cell
    z2 = z[:,1] # end cell

    for i in range(1): # need to change to len(z)
        globals()['x_%s' % i] = x[int(z1[i]):int(z2[i]),:] # definition of x
        tsx.append(globals()["x_%s" % i])

        globals()['y_%s' % i] = y[int(z1[i])+1:int(z2[i])+1,:] # definition of y
        tsy.append(globals()["y_%s" % i])

        globals()['a_%s' % i] = lstm(tsx[i],tsy[i]) # definition of class  

        globals()['trp_%s' % i],globals()['tep_%s' % i] = globals()["a_%s" % i].model(tsx[i],tsy[i])
        trp.append(globals()["trp_%s" % i])
        tep.append(globals()["tep_%s" % i])
asked Sep 05 '17 by 황재학

2 Answers

Every time the model method is called, you are building the computational graph of your LSTM. The second time the model method is called, TensorFlow notices that you have already created variables with the same name. If the reuse flag of the scope in which the variables are created is set to False, a ValueError is raised.
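
For illustration, here is a minimal sketch that reproduces the error outside of the question's code (build_lstm_graph is a hypothetical helper; depending on the TensorFlow 1.x version, the offending variable is named rnn/basic_lstm_cell/weights or rnn/basic_lstm_cell/kernel):

    import tensorflow as tf

    def build_lstm_graph():
        # Each call creates a fresh cell object, but dynamic_rnn always
        # registers its variables under the same "rnn/..." scope
        cell = tf.contrib.rnn.BasicLSTMCell(num_units=4)
        x = tf.placeholder(tf.float32, [None, 10, 2])
        outputs, _ = tf.nn.dynamic_rnn(cell, x, dtype=tf.float32)
        return outputs

    build_lstm_graph()  # first call: variables are created
    build_lstm_graph()  # second call: raises the ValueError above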

To solve this problem, set the reuse flag to True by calling tf.get_variable_scope().reuse_variables() at the end of your loop.

Note that you can't add this at the beginning of your loop, because then you would be trying to reuse variables that have not yet been created.
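
For example, applied to the loop at the bottom of the question (a minimal sketch; it assumes tsx and tsy already hold the sub-samples and drops the globals() indirection):

    for i in range(len(z)):
        a = lstm(tsx[i], tsy[i])
        trp_i, tep_i = a.model(tsx[i], tsy[i])
        trp.append(trp_i)
        tep.append(tep_i)
        # mark the current variable scope as reusable, so the next
        # iteration reuses the LSTM weights instead of redefining them
        tf.get_variable_scope().reuse_variables()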

You can find more info in the TensorFlow documentation on sharing variables.

answered Nov 04 '22 by GeertH

You define some variables in the "model" function. Try something like this when you want to call the "model" function multiple times:

    # First call: the variables are created inside the "model_fn" scope
    with tf.variable_scope("model_fn") as scope:
        train_predict, test_predict = model(input1)
    # Second call: re-enter the same scope with reuse=True so the existing
    # variables are shared instead of being redefined
    with tf.variable_scope(scope, reuse=True):
        train_predict, test_predict = model(input2)
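
If your TensorFlow version is 1.4 or newer, reuse=tf.AUTO_REUSE spares you from distinguishing the first call from later ones; a sketch using the same hypothetical model, input1, and input2 as above:

    # AUTO_REUSE creates the variables on the first call into the scope
    # and silently reuses them on every subsequent call
    with tf.variable_scope("model_fn", reuse=tf.AUTO_REUSE):
        train_predict, test_predict = model(input1)
    with tf.variable_scope("model_fn", reuse=tf.AUTO_REUSE):
        train_predict, test_predict = model(input2)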
answered Nov 04 '22 by Hong Guan