I have a variable batch size, so all of my inputs are of the form
tf.placeholder(tf.float32, shape=(None, ...))
to accept the variable batch sizes. However, how might you create a constant value with variable batch size? The issue is with this line:
log_probs = tf.constant(0.0, dtype=tf.float32, shape=[None, 1])
It is giving me an error:
TypeError: unsupported operand type(s) for *: 'NoneType' and 'int'
I'm sure it is possible to initialize a constant tensor with a variable batch size; how might I do so?
I've also tried the following:
tf.constant(0.0, dtype=tf.float32, shape=[-1, 1])
I get this error:
ValueError: Too many elements provided. Needed at most -1, but received 1
In TensorFlow, the difference between constants and variables is that a constant's value cannot be changed after it is declared (and it must be initialized with a value, not with an operation), whereas a Variable's value can be changed later, for example with assign().
tf.constant() is used to create a Tensor from tensor-like objects such as lists. Syntax: tf.constant(value, dtype, shape, name). Parameters: value is the value to be converted to a Tensor; dtype (optional) defines the type of the output Tensor.
TensorFlow variables represent tensors whose values can be changed by running operations on them. assign() is the method on the Variable class used to assign a new tf.Tensor to the variable; the new value must have the same shape and dtype as the old Variable value.
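As a rough illustration of that difference (a minimal sketch using the TF 1.x graph API that the rest of this page already uses; the values are arbitrary):

import tensorflow as tf

# A constant's value is fixed once it is created.
c = tf.constant([1.0, 2.0], dtype=tf.float32)

# A Variable can be given a new value later via `assign()`,
# provided the new value has the same shape and dtype.
v = tf.Variable([1.0, 2.0], dtype=tf.float32)
update = v.assign([3.0, 4.0])

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(c))       # [1. 2.]
    print(sess.run(update))  # [3. 4.]
    print(sess.run(v))       # [3. 4.] -- the variable now holds the new value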
A `tf.constant()` has a fixed size and value at graph construction time, so it probably isn't the right op for your application. If you are trying to create a tensor with a dynamic size and the same (constant) value for every element, you can use `tf.fill()` and `tf.shape()` to create an appropriately shaped tensor. For example, to create a tensor `t` that has the same shape as `input` and the value 0.5 everywhere:
input = tf.placeholder(tf.float32, shape=(None, ...))
# `tf.shape(input)` takes the dynamic shape of `input`.
t = tf.fill(tf.shape(input), 0.5)
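To tie this back to the `log_probs` line from the question, one option (a sketch assuming a hypothetical feature size of 8) is to take only the dynamic batch dimension from `tf.shape()` and pair it with the fixed second dimension of 1:

import tensorflow as tf

input = tf.placeholder(tf.float32, shape=(None, 8))  # hypothetical feature size of 8

# Combine the dynamic batch size with a fixed second dimension of 1,
# which gives the `[None, 1]` shape the question asks for.
log_probs = tf.fill(tf.stack([tf.shape(input)[0], 1]), 0.0)

with tf.Session() as sess:
    out = sess.run(log_probs, feed_dict={input: [[1.0] * 8] * 3})
    print(out.shape)  # (3, 1) -- the batch dimension follows the fed input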
As Yaroslav mentions in his comment, you may also be able to use (NumPy-style) broadcasting to avoid materializing a tensor with a dynamic shape. For example, if `input` has shape `(None, 32)` and `t` has shape `(1, 32)`, then computing `tf.mul(input, t)` will broadcast `t` on the first dimension to match the shape of `input`.
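A rough sketch of that broadcasting approach (using `tf.multiply`, the later name for `tf.mul`, and an arbitrary constant value of 0.5):

import tensorflow as tf

input = tf.placeholder(tf.float32, shape=(None, 32))

# A (1, 32) constant broadcasts against the dynamic batch dimension,
# so no tensor with a runtime-dependent shape has to be materialized.
t = tf.constant(0.5, dtype=tf.float32, shape=(1, 32))
scaled = tf.multiply(input, t)

with tf.Session() as sess:
    out = sess.run(scaled, feed_dict={input: [[2.0] * 32] * 4})
    print(out.shape)  # (4, 32) -- `t` was broadcast across the batch dimension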