I am using TensorFlow to write a program that validates models, and I use a FIFOQueue to queue the input data. For example, I have 50,000 images and enqueue 100 images at a time. The program works beautifully except for the final iteration, where it shows the error: "E tensorflow/core/client/tensor_c_api.cc:485] FIFOQueue '_0_path_queue' is closed and has insufficient elements (requested 1, current size 0) [[Node: path_queue_Dequeue = QueueDequeue_class=["loc:@path_queue"], component_types=[DT_INT32, DT_BOOL, DT_STRING], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"]]"
I think that is because it tries to dequeue images 50,001-50,100 but cannot. However, I don't need those images and will not use them. How can I avoid this error?
My other question: if I use dequeue_many(100) but the total number of images is not divisible by 100 (say 45,678), TensorFlow throws an error on the final partial batch. How can I solve this?
Thanks.
Try dequeue_up_to instead of dequeue_many:
https://www.tensorflow.org/versions/r0.10/api_docs/python/io_ops.html
Hope that helps!
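To make the difference concrete without needing a TensorFlow session, here is a toy plain-Python sketch (the ToyQueue class is hypothetical, not the real TF API) that mimics the documented semantics: dequeue_many(n) fails on a closed queue holding fewer than n elements, while dequeue_up_to(n) returns whatever is left as a final partial batch.

```python
from collections import deque

class ToyQueue:
    """Hypothetical in-memory stand-in illustrating the semantics of
    TensorFlow's dequeue_many vs dequeue_up_to (not the real API)."""

    def __init__(self, items):
        self._q = deque(items)

    def dequeue_many(self, n):
        # Like TF's dequeue_many: on a closed queue with fewer than n
        # elements remaining, the op fails instead of returning a batch.
        if len(self._q) < n:
            raise RuntimeError(
                "queue is closed and has insufficient elements "
                f"(requested {n}, current size {len(self._q)})")
        return [self._q.popleft() for _ in range(n)]

    def dequeue_up_to(self, n):
        # Like TF's dequeue_up_to: returns up to n elements, so the final
        # partial batch comes back without an error.
        batch = []
        while self._q and len(batch) < n:
            batch.append(self._q.popleft())
        if not batch:
            # Analogue of tf.errors.OutOfRangeError once truly exhausted.
            raise RuntimeError("queue is closed and empty")
        return batch

q = ToyQueue(range(250))
print(len(q.dequeue_up_to(100)))  # 100
print(len(q.dequeue_up_to(100)))  # 100
print(len(q.dequeue_up_to(100)))  # 50 <- final partial batch, no error
```

With dequeue_many the third call above would fail exactly like the error in the question; with dequeue_up_to you simply get a smaller last batch.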
You could catch the specific error which will gracefully end training once all examples have been exhausted:
try:
    while True:
        sess.run(train_op)  # run your training ops here
except tf.errors.OutOfRangeError:
    print('Done training -- epoch limit reached')