In the following code, None is used to declare the shape of the placeholders:
x_data = tf.placeholder(tf.int32, [None, max_sequence_length])
y_output = tf.placeholder(tf.int32, [None])
As far as I know, None is used to specify a variable batch dimension. But in this code we already have a variable that holds the batch size, such as:
batch_size = 250
So is there any reason to use None in such cases instead of simply declaring the placeholders like this?
x_data = tf.placeholder(tf.int32, [batch_size, max_sequence_length])
y_output = tf.placeholder(tf.int32, [batch_size])
It is just so that the input of the network is not bound to a fixed batch size, and you can later reuse the trained network to predict either single instances or arbitrarily large batches (e.g. predict all your test samples at once).
In other words, it doesn't do much during training, as batches are usually of a fixed size during training anyway, but it makes the network more useful when testing.
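As a minimal sketch of this (TF 1.x API to match the question; max_sequence_length and the reduce_sum "network" here are placeholder stand-ins, not from the original code), the same graph can be fed batches of different sizes when the first dimension is None:

import numpy as np
import tensorflow as tf

max_sequence_length = 20  # assumed value for illustration

# Placeholder with a None batch dimension accepts any batch size.
x_data = tf.placeholder(tf.int32, [None, max_sequence_length])
row_sums = tf.reduce_sum(x_data, axis=1)  # stand-in for a real network

with tf.Session() as sess:
    # Train-time feed: a full batch of 250 rows.
    train_batch = np.zeros((250, max_sequence_length), dtype=np.int32)
    print(sess.run(row_sums, {x_data: train_batch}).shape)  # (250,)

    # Test-time feed: a single instance, with no changes to the graph.
    one_sample = np.zeros((1, max_sequence_length), dtype=np.int32)
    print(sess.run(row_sums, {x_data: one_sample}).shape)   # (1,)

Had the placeholder been declared with [batch_size, ...] instead, the second sess.run call would fail, because a feed of shape (1, 20) would not match the fixed shape (250, 20).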