 

Tensorflow: Attempting to use uninitialized value beta1_power

I get the following error when I try to run the code at the end of this post, but it is not clear to me what is wrong with my code. Could anybody also share some tips on how to debug a TensorFlow program?

$ ./main.py 
Extracting /tmp/data/train-images-idx3-ubyte.gz
Extracting /tmp/data/train-labels-idx1-ubyte.gz
Extracting /tmp/data/t10k-images-idx3-ubyte.gz
Extracting /tmp/data/t10k-labels-idx1-ubyte.gz
2017-12-11 22:53:16.061163: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
Traceback (most recent call last):
  File "./main.py", line 55, in <module>
    sess.run(opt, feed_dict={x: batch_x, y: batch_y})
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 889, in run
    run_metadata_ptr)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1120, in _run
    feed_dict_tensor, options, run_metadata)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1317, in _do_run
    options, run_metadata)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1336, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.FailedPreconditionError: Attempting to use uninitialized value beta1_power
     [[Node: beta1_power/read = Identity[T=DT_FLOAT, _class=["loc:@Variable"], _device="/job:localhost/replica:0/task:0/device:CPU:0"](beta1_power)]]

Caused by op u'beta1_power/read', defined at:
  File "./main.py", line 46, in <module>
    opt=tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(loss)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/optimizer.py", line 353, in minimize
    name=name)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/optimizer.py", line 474, in apply_gradients
    self._create_slots([_get_variable_for(v) for v in var_list])
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/adam.py", line 130, in _create_slots
    trainable=False)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 1927, in variable
    caching_device=caching_device, name=name, dtype=dtype)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/ops/variables.py", line 213, in __init__
    constraint=constraint)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/ops/variables.py", line 356, in _init_from_args
    self._snapshot = array_ops.identity(self._variable, name="read")
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/ops/array_ops.py", line 125, in identity
    return gen_array_ops.identity(input, name=name)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/ops/gen_array_ops.py", line 2071, in identity
    "Identity", input=input, name=name)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2956, in create_op
    op_def=op_def)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1470, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

FailedPreconditionError (see above for traceback): Attempting to use uninitialized value beta1_power
     [[Node: beta1_power/read = Identity[T=DT_FLOAT, _class=["loc:@Variable"], _device="/job:localhost/replica:0/task:0/device:CPU:0"](beta1_power)]]

The code is below. It uses an LSTM.

#!/usr/bin/env python
# vim: set noexpandtab tabstop=2 shiftwidth=2 softtabstop=-1 fileencoding=utf-8:

import tensorflow as tf
from tensorflow.contrib import rnn

#import mnist dataset
from tensorflow.examples.tutorials.mnist import input_data
mnist=input_data.read_data_sets("/tmp/data/", one_hot=True)

learning_rate=0.001

#defining placeholders
#input image placeholder
time_steps=28
n_input=28
x=tf.placeholder("float", [None, time_steps, n_input])

#processing the input tensor from [batch_size,n_steps,n_input] to "time_steps" number of [batch_size,n_input] tensors
input=tf.unstack(x, time_steps, 1)

#defining the network
num_units=128
lstm_layer = rnn.BasicLSTMCell(num_units, forget_bias=1)
outputs,_ = rnn.static_rnn(lstm_layer, input, dtype="float32")

#weights and biases of appropriate shape to accomplish above task
n_classes=10
out_weights=tf.Variable(tf.random_normal([num_units, n_classes]))
out_bias=tf.Variable(tf.random_normal([n_classes]))

#converting last output of dimension [batch_size,num_units] to [batch_size,n_classes] by out_weight multiplication
prediction=tf.matmul(outputs[-1], out_weights) + out_bias

y=tf.placeholder("float", [None, n_classes])
loss=tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=prediction, labels=y))
#optimization

#model evaluation
correct_prediction=tf.equal(tf.argmax(prediction, 1), tf.argmax(y, 1))
accuracy=tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

#initialize variables
init=tf.global_variables_initializer()
batch_size=128
opt=tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(loss)
with tf.Session() as sess:
    sess.run(init)
    iter=1
    while iter<800:
        batch_x, batch_y = mnist.train.next_batch(batch_size=batch_size)

        batch_x=batch_x.reshape((batch_size, time_steps, n_input))

        sess.run(opt, feed_dict={x: batch_x, y: batch_y})

        if iter %10==0:
            acc=sess.run(accuracy,feed_dict={x:batch_x,y:batch_y})
            los=sess.run(loss,feed_dict={x:batch_x,y:batch_y})
            print("For iter ",iter)
            print("Accuracy ",acc)
            print("Loss ",los)
            print("__________________")

        iter=iter+1


#calculating test accuracy
test_data = mnist.test.images[:128].reshape((-1, time_steps, n_input))
test_label = mnist.test.labels[:128]
print("Testing Accuracy:", sess.run(accuracy, feed_dict={x: test_data, y: test_label}))
asked Dec 12 '17 by user1424739


1 Answer

Change the order of these two lines:

opt=tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(loss)
init=tf.global_variables_initializer()

Since AdamOptimizer creates its own variables (such as beta1_power), you should define the initializer init after opt, not before.
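
For reference, a minimal sketch of the corrected ordering, reusing the names from the question (the training op is built first, so the initializer also covers the variables Adam adds to the graph):

# Build the full graph first, including the training op, so that Adam's
# internal variables (beta1_power, beta2_power and the per-weight slots) exist.
opt = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(loss)

# Only now create the initializer: it covers every variable defined so far.
init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)  # initializes the LSTM/output weights and Adam's variables
    # ... training loop unchanged from the question ...

As a general debugging aid for this kind of FailedPreconditionError, running sess.run(tf.report_uninitialized_variables()) prints the names of any variables that have not been initialized yet, which points directly at the offending variable (here, beta1_power).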

answered Oct 12 '22 by Maxim