
Restore TensorFlow model without extracting from directory

I'm currently saving and restoring neural network models using TensorFlow's Saver class, as shown below:

saver.save(sess, checkpoint_prefix, global_step=step)

saver.restore(sess, checkpoint_file)

This saves .ckpt files of the model to a specified path. Because I am running multiple experiments, I have limited space to save these models.

I would like to know if there is a way to save these models without saving content in specified directories.

E.g., can I just pass some object from the last checkpoint to some evaluate() function and restore the model from that object?

As far as I can see, the save_path parameter of tf.train.Saver.restore() is not optional.

Any insight would be much appreciated.

Thanks

asked Oct 09 '18 by haxtar

People also ask

How do I save and restore a TensorFlow model?

To save and restore your variables, all you need to do is call tf.train.Saver() at the end of your graph. Saving will create three files (data, index, meta) with a suffix of the step at which you saved your model.

Can we save a TensorFlow model with pickle?

You need to extract the weights from the model, build an array out of them, and pickle that array.
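A sketch of that approach in plain Python, no TensorFlow required: assume the weights have already been fetched as NumPy arrays, e.g. with sess.run on the trainable variables (the example arrays below are made up for illustration):

```python
import pickle

import numpy as np

# Stand-in for something like: weights = sess.run(tf.trainable_variables())
weights = [np.arange(6.0).reshape(2, 3), np.zeros(3)]

# Serialize the list of arrays to bytes (pickle.dump would write to a file).
blob = pickle.dumps(weights)

# Deserialize and verify the round trip preserved every array.
restored = pickle.loads(blob)
for original, loaded in zip(weights, restored):
    assert np.array_equal(original, loaded)
```

Restoring would then mean assigning each array back to its variable, e.g. with var.load(value, sess) in TF1 graph mode.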

How do I save a whole model in TensorFlow?

Call tf.keras.Model.save to save a model's architecture, weights, and training configuration in a single file or folder.


1 Answer

You can use the loaded graph and weights to evaluate in the same way that you train. You just need to change the input to be from your evaluation set. Here is some pseudocode of a training loop with an evaluation loop every 1000 iterations (it assumes you have already created a tf.Session called sess):

x = tf.placeholder(...)
loss, train_step = model(x)
for i in range(num_step):
    # train on one batch
    input_x = get_train_data(i)
    sess.run(train_step, feed_dict={x: input_x})
    # every 1000 iterations, evaluate on the same graph with eval data
    if i % 1000 == 0 and i != 0:
        eval_loss = 0
        for j in range(num_eval):
            input_x = get_eval_data(j)
            eval_loss += sess.run(loss, feed_dict={x: input_x})
        print(eval_loss / num_eval)

If you're using tf.data for your input then you can just create a tf.cond to select which input to use:

is_training = tf.placeholder(tf.bool)
next_element = tf.cond(is_training,
                       lambda: get_next_train(),
                       lambda: get_next_eval())

get_next_train and get_next_eval would have to create all the ops used for reading the dataset; otherwise running the code above will have side effects.

This way you don't have to save anything to disk if you don't want to.
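The same idea covers the question's "pass some object to evaluate()" use case: instead of saver.save, fetch the current variable values into a plain Python object and restore from that object later. A minimal NumPy-only sketch of the pattern (the model, snapshot, and restore names here are made up for illustration; in TF1 the fetch would be sess.run on the variables and the restore would be var.load per variable):

```python
import numpy as np

# Toy "model": parameters held as a dict of NumPy arrays.
params = {"w": np.ones((2, 2)), "b": np.zeros(2)}

def snapshot(params):
    # TF1 analogue: dict(zip(names, sess.run(variables)))
    return {name: value.copy() for name, value in params.items()}

def restore(params, checkpoint):
    # TF1 analogue: var.load(checkpoint[name], sess) for each variable
    for name in params:
        params[name][...] = checkpoint[name]

best = snapshot(params)   # keep the checkpoint in memory, not on disk
params["w"] += 1.0        # training keeps mutating the parameters
restore(params, best)     # roll back to the in-memory checkpoint
```

The in-memory checkpoint lives only as long as the process, which matches the question's goal of avoiding .ckpt files entirely.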

answered Oct 21 '22 by McAngus