I changed from tf.train.Saver to the SavedModel format, which surprisingly means loading my model from disk is a lot slower (it takes minutes instead of a couple of seconds). Why is this, and what can I do to load the model faster?
I used to do this:
# Save model
saver = tf.train.Saver()
save_path = saver.save(session, model_path)

# Load model
saver = tf.train.import_meta_graph(model_path + '.meta')
saver.restore(session, model_path)
But now I do this:
# Save model
builder = tf.saved_model.builder.SavedModelBuilder(model_path)
builder.add_meta_graph_and_variables(session, [tf.saved_model.tag_constants.TRAINING])
builder.save()

# Load model
tf.saved_model.loader.load(session, [tf.saved_model.tag_constants.TRAINING], model_path)
Model restoring is done using tf.saved_model.loader, which restores the saved variables, signatures, and assets within the scope of a session.
Checkpoints capture the exact value of all parameters (tf.Variable objects) used by a model. Checkpoints do not contain any description of the computation defined by the model and thus are typically only useful when source code that will use the saved parameter values is available.
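To illustrate that last point, here is a minimal sketch (TF 1.x, with hypothetical variable names, shapes, and path) of restoring a checkpoint: the graph has to be rebuilt from source code first, because the checkpoint itself only supplies variable values.

import tensorflow as tf

# Rebuild the same graph from source code first -- the checkpoint alone
# cannot recreate it. Names, shapes, and the path below are hypothetical.
x = tf.placeholder(tf.float32, [None, 10], name='x')
w = tf.Variable(tf.zeros([10, 1]), name='w')
y = tf.matmul(x, w, name='y')

model_path = '/tmp/my_model/model.ckpt'  # hypothetical checkpoint prefix
saver = tf.train.Saver()
with tf.Session() as session:
    saver.restore(session, model_path)  # restores the values of w only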
I am by no means an expert in TensorFlow, but if I had to guess why this is happening, I would say:
Depending on the size of your graph, recreating everything it contains might take some time (a rough way to check the graph's size is sketched below).
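As a quick sanity check of how large the loaded graph actually is, you could inspect its GraphDef after loading (standard TF 1.x graph inspection; the node count is only an indicator, not a precise cost model):

import tensorflow as tf

# Inspect the loaded graph's size: node count and serialized byte size.
# Run this after tf.saved_model.loader.load(...) or import_meta_graph(...).
graph_def = tf.get_default_graph().as_graph_def()
print('Nodes in graph:', len(graph_def.node))
print('Serialized GraphDef size (bytes):', graph_def.ByteSize())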
Concerning the second question, as @J H said, if there is no reason for you to prefer one strategy over the other and time is of the essence, just go with the faster one.
what can I do to load the model faster?
Switch back to tf.train.Saver, as your question shows no motivation for using SavedModelBuilder and makes it clear that elapsed time matters to you. Alternatively, an MCVE that reproduces the timing issue would allow others to collaborate with you on profiling, diagnosing, and fixing any perceived performance issue.
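If you do want to put together such an MCVE, a minimal timing harness along these lines would make the comparison concrete (TF 1.x; the paths are hypothetical, and the tag list should match whatever you exported with):

import time
import tensorflow as tf

def time_saver_restore(model_path):
    """Time restoring a tf.train.Saver checkpoint (graph comes from the .meta file)."""
    tf.reset_default_graph()
    with tf.Session() as session:
        start = time.time()
        saver = tf.train.import_meta_graph(model_path + '.meta')
        saver.restore(session, model_path)
        return time.time() - start

def time_saved_model_load(export_dir):
    """Time loading a SavedModel exported with the TRAINING tag."""
    tf.reset_default_graph()
    with tf.Session() as session:
        start = time.time()
        tf.saved_model.loader.load(
            session, [tf.saved_model.tag_constants.TRAINING], export_dir)
        return time.time() - start

# Hypothetical paths; replace with your real checkpoint prefix and export directory.
print('Saver restore:   %.2fs' % time_saver_restore('/tmp/model/model.ckpt'))
print('SavedModel load: %.2fs' % time_saved_model_load('/tmp/model/saved'))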