I am feeding my training examples to my network through a queue, using the code below, and it works properly.
However, I would like to be able to feed some test data every n iterations, and I am not sure how to proceed. Should I momentarily stop the queue and feed the test data manually? Should I create another queue just for the test data?
Edit: Is the right way to do this to create a separate file, say eval.py, that continuously reads the latest checkpoint and evaluates the network? This is how it is done in the CIFAR-10 example.
import threading
import tensorflow as tf

batch = 128  # size of the batch

x = tf.placeholder("float32", [None, n_steps, n_input])
y = tf.placeholder("float32", [None, n_classes])

queue = tf.RandomShuffleQueue(capacity=4 * batch,
                              min_after_dequeue=3 * batch,
                              dtypes=[tf.float32, tf.float32],
                              shapes=[[n_steps, n_input], [n_classes]])

enqueue_op = queue.enqueue_many([x, y])
X_batch, Y_batch = queue.dequeue_many(batch)

sess = tf.Session()

def load_and_enqueue(data):
    # Background thread: keep pushing batches into the queue.
    while True:
        X, Y = data.get_next_batch(batch)
        sess.run(enqueue_op, feed_dict={x: X, y: Y})

train_thread = threading.Thread(target=load_and_enqueue, args=(data,))
train_thread.daemon = True
train_thread.start()

for _ in xrange(max_iter):
    sess.run(train_op)
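Regarding the edit: a minimal sketch of such an eval.py loop, loosely modeled on the CIFAR-10 evaluation script, might look like the following. Here inference, build_test_inputs, checkpoint_dir and eval_interval_secs are hypothetical stand-ins for your own model function, test input pipeline, checkpoint directory and polling interval, and the accuracy is computed over a single test batch.

import time
import tensorflow as tf

# Build the evaluation graph once: test inputs -> model -> accuracy.
testX, testY = build_test_inputs(batch)        # hypothetical test input pipeline
logits = inference(testX)                      # same model definition as in training
correct = tf.equal(tf.argmax(logits, 1), tf.argmax(testY, 1))
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))

saver = tf.train.Saver()

with tf.Session() as sess:
    tf.train.start_queue_runners(sess=sess)    # only needed if the test pipeline uses TF queues
    while True:
        ckpt = tf.train.latest_checkpoint(checkpoint_dir)
        if ckpt is not None:
            saver.restore(sess, ckpt)          # load the most recent training weights
            print("test accuracy: %f" % sess.run(accuracy))
        time.sleep(eval_interval_secs)         # poll for new checkpoints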
You can build another test queue and a copy of the training model as a test model, like this:
trainX, trainY = Queue0(batchSize, ...)   # training input pipeline
testX, testY = Queue1(batchSize, ...)     # test input pipeline

modelTrain = inference(trainX, trainY, ...)
# reuse variables
modelTest = inference(testX, testY, ...)

sess.run([train_op, loss_op])   # fed from the training queue (trainX, trainY)
sess.run(test_op)               # fed from the test queue (testX, testY)
This approach may consume more memory, since two copies of the model graph are built (they can share the same variables). I hope to see a better solution.
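A more concrete, runnable sketch of this variable-reuse pattern, with constant tensors standing in for the two queue pipelines and a simple linear inference function as a stand-in for the real model (all names and sizes below are illustrative):

import numpy as np
import tensorflow as tf

n_input, n_classes = 10, 3

def inference(x):
    # Simple linear model; tf.get_variable makes the weights shareable.
    w = tf.get_variable("w", [n_input, n_classes])
    b = tf.get_variable("b", [n_classes], initializer=tf.zeros_initializer())
    return tf.matmul(x, w) + b

# Stand-ins for the two queue pipelines (trainX/trainY and testX/testY).
trainX = tf.constant(np.random.rand(32, n_input), dtype=tf.float32)
trainY = tf.one_hot(np.random.randint(n_classes, size=32), n_classes)
testX = tf.constant(np.random.rand(32, n_input), dtype=tf.float32)
testY = tf.one_hot(np.random.randint(n_classes, size=32), n_classes)

with tf.variable_scope("model"):
    train_logits = inference(trainX)
with tf.variable_scope("model", reuse=True):   # share weights with the training copy
    test_logits = inference(testX)

loss = tf.losses.softmax_cross_entropy(trainY, train_logits)
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
accuracy = tf.reduce_mean(tf.cast(
    tf.equal(tf.argmax(test_logits, 1), tf.argmax(testY, 1)), tf.float32))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(100):
        sess.run(train_op)
        if i % 10 == 0:   # evaluate every n iterations
            print("step %d, test accuracy %f" % (i, sess.run(accuracy)))

Because the test copy is built with reuse=True, it evaluates the same weights that training is updating, so no checkpoint round-trip is needed.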