 

Why use None for the batch dimension in tensorflow?


In the following code, the None is used to declare the size of the placeholders.

x_data = tf.placeholder(tf.int32, [None, max_sequence_length]) 
y_output = tf.placeholder(tf.int32, [None])

As far as I know, this None is used to specify a variable batch dimension. But such code usually also defines a variable holding the batch size, for example:

batch_size = 250

So, is there any reason to use None in such cases instead of simply declaring the placeholders as follows?

x_data = tf.placeholder(tf.int32, [batch_size, max_sequence_length]) 
y_output = tf.placeholder(tf.int32, [batch_size])
asked Jun 09 '17 by Hossein

1 Answer

It is just so that the input of the network isn't bound to fixed-size batches, and you can later reuse the trained network to predict either single instances or arbitrarily large batches (e.g. predict all your test samples at once).

In other words, it doesn't make much difference during training, as batches are usually of a fixed size there anyway, but it makes the network more flexible at test time.
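As a minimal sketch of this flexibility (using the TF1-style graph API via `tf.compat.v1`, which the question's `tf.placeholder` calls imply; the sequence length and the toy `* 2` op are made up for illustration), one placeholder declared with a `None` batch dimension can be fed batches of different sizes:

```python
import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

max_sequence_length = 10

# Batch dimension left as None: any number of rows is accepted at run time.
x_data = tf.placeholder(tf.int32, [None, max_sequence_length])
doubled = x_data * 2  # placeholder for some real network

with tf.Session() as sess:
    # A full training-sized batch of 250 samples...
    train_out = sess.run(
        doubled, feed_dict={x_data: np.zeros((250, max_sequence_length), np.int32)}
    )
    # ...and a single instance, through the very same graph.
    single_out = sess.run(
        doubled, feed_dict={x_data: np.zeros((1, max_sequence_length), np.int32)}
    )

print(train_out.shape)   # (250, 10)
print(single_out.shape)  # (1, 10)
```

Had the placeholder been declared with `[batch_size, max_sequence_length]`, the second `sess.run` would raise a shape-mismatch error.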

answered Nov 15 '22 by Imanol Luengo