 

TensorFlow Dataset.shuffle - large dataset [duplicate]

I'm using TensorFlow 1.2 with a dataset in a 20 GB TFRecord file. There are about half a million samples in that TFRecord file.

It looks like if I choose a value for buffer_size smaller than the number of records in the dataset, only the first N records in the TFRecord will be used: https://www.tensorflow.org/api_docs/python/tf/contrib/data/Dataset#shuffle

For example, if buffer_size = 100, it seems that only the first 100 records are ever used.

Question

Should buffer_size always be the length of the dataset? Would that impact training performance?
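For reference, a minimal sketch of the pipeline in question, using the TF 1.2-era contrib API (the file name, buffer size, and batch size are placeholders):

    import tensorflow as tf

    # TF 1.2-era input pipeline (tf.contrib.data); "train.tfrecord" is a placeholder.
    dataset = tf.contrib.data.TFRecordDataset("train.tfrecord")
    dataset = dataset.shuffle(buffer_size=100)  # the parameter in question
    dataset = dataset.batch(32)

    iterator = dataset.make_one_shot_iterator()
    next_batch = iterator.get_next()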

asked Dec 12 '17 by rodrigo-silveira

1 Answer

No matter what buffer size you choose, all samples will be used; the buffer size only affects the randomness of the shuffle.

If buffer_size is 100, TensorFlow keeps a buffer of the next 100 samples and randomly selects one of those 100 samples to emit. It then adds the next element from the stream to the buffer.

So, if buffer_size = 1 there is no shuffling at all, and if buffer_size >= data_set_size a perfect uniform random shuffle is guaranteed.
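To make the mechanics concrete, here is a small pure-Python simulation of this buffered shuffle (buffered_shuffle is a hypothetical helper written for illustration, not TensorFlow code):

    import random

    def buffered_shuffle(stream, buffer_size, seed=None):
        # Illustrative stand-in for Dataset.shuffle: keep a buffer of
        # buffer_size elements, emit a random one, refill from the stream.
        rng = random.Random(seed)
        buffer = []
        for item in stream:
            buffer.append(item)
            if len(buffer) >= buffer_size:
                yield buffer.pop(rng.randrange(len(buffer)))
        while buffer:  # drain the buffer once the stream is exhausted
            yield buffer.pop(rng.randrange(len(buffer)))

    print(list(buffered_shuffle(range(10), buffer_size=1)))   # identity: no shuffling
    print(list(buffered_shuffle(range(10), buffer_size=4)))   # only locally shuffled
    print(list(buffered_shuffle(range(10), buffer_size=10)))  # uniform shuffle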

I would highly suggest shuffling the dataset before creating the TFRecord file, and keeping buffer_size small at training time.
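A minimal sketch of that pre-shuffling step (the toy data, file name, and feature layout are placeholders for your own records):

    import random
    import tensorflow as tf

    samples = [(float(i), i % 2) for i in range(1000)]  # toy (value, label) pairs
    random.shuffle(samples)  # one full in-memory shuffle before serialization

    with tf.python_io.TFRecordWriter("train.tfrecord") as writer:
        for value, label in samples:
            example = tf.train.Example(features=tf.train.Features(feature={
                "value": tf.train.Feature(float_list=tf.train.FloatList(value=[value])),
                "label": tf.train.Feature(int64_list=tf.train.Int64List(value=[label])),
            }))
            writer.write(example.SerializeToString())

    # At training time a small buffer is then enough:
    # dataset = dataset.shuffle(buffer_size=1000)

For a file too large to shuffle fully in memory, the same effect can be approximated by shuffling an index of the records before writing, or by writing several shards in random order.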

answered Nov 09 '22 by Matan Hugi