I'm trying to define an operation for an NN I'm implementing, but to do so I need to iterate over the batch dimension of a tensor. I have a small working example below.
X = tf.placeholder(tf.float32, shape=[None, 10])
idx = [[i] for i in tf.range(X.get_shape()[0])]
This produces an error stating:
ValueError: Cannot convert an unknown Dimension to a Tensor: ?
When I use the same code but with tf.shape instead, i.e.
X = tf.placeholder(tf.float32, shape=[None, 10])
idx = [[i] for i in tf.range(tf.shape(X)[0])]
it gives the following error:
TypeError: 'Tensor' object is not iterable.
The way I'm implementing this NN, the batch_size isn't defined until the training function, which comes at the end of the code. The code above is just where I'm building the graph itself, so the batch_size isn't known at that point, and it can't be fixed, since the training batch_size and the test set batch_sizes are different.
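To illustrate the constraint, the same graph has to accept both batch sizes at feed time. A minimal sketch of the pattern (the batch sizes 32 and 500 are just examples):
import numpy as np
import tensorflow as tf

X = tf.placeholder(tf.float32, shape=[None, 10])  # batch dimension left as None
Y = X * 2.0  # stand-in for the rest of the graph

with tf.Session() as sess:
    print(sess.run(Y, {X: np.zeros((32, 10))}).shape)   # training batch ==> (32, 10)
    print(sess.run(Y, {X: np.zeros((500, 10))}).shape)  # test batch ==> (500, 10)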
What is the best way to fix this? It's the last thing keeping my code from running; I did get it to run with a fixed batch_size, but those results aren't useful. I've been poring over the TensorFlow API documentation and Stack Overflow for weeks to no avail.
I've also tried feeding a placeholder into the range, so that when running the test/training set the code would be the following:
X = tf.placeholder(tf.float32, shape=[None, 10])
bs = tf.placeholder(tf.int32)
def My_Function(X):
    # Do some stuff to X
    idx = [[i] for i in tf.range(bs)]
    # return some tensor
A = tf.nn.relu(My_Function(X))
However, this gives the same error as above:
TypeError: 'Tensor' object is not iterable.
For reference, the documentation for tf.range says:

Creates a sequence of numbers that begins at start and extends by increments of delta up to but not including limit. The dtype of the resulting tensor is inferred from the inputs unless it is provided explicitly. Like the Python builtin range, start defaults to 0, so that range(n) = range(0, n).
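To make the semantics concrete, here is a minimal sketch showing that tf.range returns a Tensor, not a Python iterable, so its values only exist once Session.run() is called:
import tensorflow as tf

r = tf.range(5)  # like Python's range(5), but as a Tensor
with tf.Session() as sess:
    print(sess.run(r))  # ==> [0 1 2 3 4]
# Iterating over `r` in Python (e.g. `for i in r`) raises
# TypeError: 'Tensor' object is not iterable.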
I think you should use tf.shape(x) instead.
import numpy as np
import tensorflow as tf

x = tf.placeholder(..., shape=[None, ...])
batch_size = tf.shape(x)[0]  # Returns a scalar `tf.Tensor`
print(x.get_shape()[0])  # ==> "?"

# You can use `batch_size` as an argument to other operators.
some_other_tensor = ...
some_other_tensor_reshaped = tf.reshape(some_other_tensor, [batch_size, 32, 32])

# To get the value, however, you need to call `Session.run()`.
sess = tf.Session()
x_val = np.random.rand(37, 100, 100)
batch_size_val = sess.run(batch_size, {x: x_val})
print(batch_size_val)  # ==> "37"
See: get the size of a variable batch dimension
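Applied to the question, this means the index tensor can be built in-graph rather than with a Python comprehension. A minimal sketch (tf.expand_dims adds the inner [i] dimension):
import tensorflow as tf

X = tf.placeholder(tf.float32, shape=[None, 10])
batch_size = tf.shape(X)[0]
# Equivalent of [[0], [1], ..., [batch_size - 1]], built as a Tensor:
idx = tf.expand_dims(tf.range(batch_size), axis=1)  # shape (batch_size, 1)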
You can't operate on tensors that way. You need to use tf.map_fn, as user1735003 mentioned. Here is an example where I used tf.map_fn to pass the output of an LSTM at each timestep into a linear layer, defined by weights['out'] and biases['out'].
x = tf.placeholder("float", [features_dimension, None, n_timesteps])
weights = {'out': tf.Variable(tf.zeros([N_HIDDEN_LSTM, labels_dimension]))}
biases = {'out': tf.Variable(tf.zeros([labels_dimension]))}
def LSTM_model(x, weights, biases):
lstm_cell = rnn.LSTMCell(N_HIDDEN_LSTM)
# outputs is a Tensor of shape (n_timesteps, n_observations, N_HIDDEN_LSTM)
outputs, states = tf.nn.dynamic_rnn(lstm_cell, x, dtype=tf.float32, time_major=True)
# Linear activation
def pred_fn(current_output):
return tf.matmul(current_output, weights['out']) + biases['out']
# Use tf.map_fn to apply pred_fn to each tensor in outputs, along
# dimension 0 (timestep dimension)
pred = tf.map_fn(pred_fn, outputs)
return pred
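The same pattern also solves the question's original problem, iterating over the batch dimension of X. A minimal sketch, where the function body is just a stand-in for "do some stuff to X":
import tensorflow as tf

X = tf.placeholder(tf.float32, shape=[None, 10])

def my_function(x_i):
    # x_i is one row of X, shape (10,); stand-in for the real computation
    return x_i * 2.0

# tf.map_fn applies my_function to each element along dimension 0 (the batch)
A = tf.nn.relu(tf.map_fn(my_function, X))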