TensorFlow: How to ensure Tensors are in the same graph

I'm trying to get started with TensorFlow in Python by building a simple feed-forward NN. I have one class that holds the network weights (variables that are updated during training and are meant to stay fixed at inference time) and a separate script to train the network, which fetches the training data, splits it into batches, and trains the network batch by batch. When I try to train the network, I get an error indicating that the data tensor is not in the same graph as the NN tensors:

ValueError: Tensor("Placeholder:0", shape=(10, 5), dtype=float32) must be from the same graph as Tensor("windows/embedding/Cast:0", shape=(100232, 50), dtype=float32).

The relevant parts of the training script are:

def placeholder_inputs(batch_size, ner):
  windows_placeholder = tf.placeholder(tf.float32, shape=(batch_size, ner.windowsize))
  labels_placeholder = tf.placeholder(tf.int32, shape=(batch_size))
  return windows_placeholder, labels_placeholder

with tf.Session() as sess:
  windows_placeholder, labels_placeholder = placeholder_inputs(batch_size, ner)
  logits = ner.inference(windows_placeholder)

And the relevant parts of the network class are:

class WindowNER(object):
    def __init__(self, wv, windowsize=3, dims=[None, 100, 5], reg=0.01):
        self.reg = reg
        self.windowsize = windowsize
        self.vocab_size = wv.shape[0]
        self.embedding_dim = wv.shape[1]
        with tf.name_scope("embedding"):
            self.L = tf.cast(tf.Variable(wv, trainable=True, name="L"), tf.float32)
        with tf.name_scope('hidden1'):
            self.W = tf.Variable(tf.truncated_normal([windowsize * self.embedding_dim, dims[1]],
                stddev=1.0 / math.sqrt(float(windowsize * self.embedding_dim))),
                name='weights')
            self.b1 = tf.Variable(tf.zeros([dims[1]]), name='biases')
        with tf.name_scope('output'):
            self.U = tf.Variable(tf.truncated_normal([dims[1], dims[2]],
                stddev=1.0 / math.sqrt(float(dims[1]))), name='weights')
            self.b2 = tf.Variable(tf.zeros([dims[2]]), name='biases')

    def inference(self, windows):
        with tf.name_scope("embedding"):
            embedded_words = tf.reshape(tf.nn.embedding_lookup(self.L, windows),
                [windows.get_shape()[0], self.windowsize * self.embedding_dim])
        with tf.name_scope("hidden1"):
            h = tf.nn.tanh(tf.matmul(embedded_words, self.W) + self.b1)
        with tf.name_scope('output'):
            t = tf.matmul(h, self.U) + self.b2
        return t

Why are there two graphs in the first place, and how can I ensure that data placeholder tensors are in the same graph as the NN?

Thanks!!

asked Oct 27 '16 by user616254

2 Answers

You should be able to create all the tensors under the same graph by building them inside a single graph context. Note that the network object has to be constructed inside that context as well, because its variables are added to whichever graph is the default at construction time:

g = tf.Graph()
with g.as_default():
  ner = WindowNER(wv)  # build the network inside the same graph context
  windows_placeholder, labels_placeholder = placeholder_inputs(batch_size, ner)
  logits = ner.inference(windows_placeholder)

with tf.Session(graph=g) as sess:
  # Run the session, e.g. sess.run(logits, feed_dict={...})

You can read more about graphs in TF here: https://www.tensorflow.org/versions/r0.8/api_docs/python/framework.html#Graph
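As a side note, if you never create a tf.Graph yourself, every op lands in the process-wide default graph, which is the simplest way to keep all tensors together. A minimal sketch of that behavior (hypothetical names, TF 1.x API):

import tensorflow as tf

# With no explicit tf.Graph(), ops are added to the global default graph.
x = tf.placeholder(tf.float32, shape=(10, 5), name="x")
w = tf.Variable(tf.zeros([5, 3]), name="w")
y = tf.matmul(x, w)

# All three belong to the same Graph object.
assert x.graph is w.graph is y.graph is tf.get_default_graph()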

answered Sep 20 '22 by Yair


Sometimes when you get an error like this, the real mistake (often using a variable from the wrong graph) happened much earlier and only propagated to the operation that finally raised the error. You might then investigate just that one line and conclude the tensors there should be from the same graph, while the actual problem lies somewhere else.

The easiest way to check is to print which graph each variable/op belongs to. You can do so simply with:

print(variable_name.graph)
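For example, a minimal sketch (hypothetical names) that reproduces the mismatch and shows what the printout reveals:

import tensorflow as tf

g = tf.Graph()
with g.as_default():
    w = tf.Variable(tf.zeros([5, 3]), name="w")  # created in g

x = tf.placeholder(tf.float32, shape=(10, 5))    # created in the default graph

print(w.graph)             # <tensorflow.python.framework.ops.Graph object at 0x...>
print(x.graph)             # a different Graph object
print(w.graph is x.graph)  # False -- this mismatch is what raises the ValueError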
answered Sep 22 '22 by Bruno KM