Checkpoints capture the exact value of all parameters (tf.Variable objects) used by a model. Checkpoints do not contain any description of the computation defined by the model, and are thus typically only useful when the source code that will use the saved parameter values is available.
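For context, saving a checkpoint in TF 1.x looks roughly like this (a minimal sketch; the variable names, shapes, and path are illustrative):

import tensorflow as tf

# Any tf.Variable objects in the graph are captured by the Saver
v1 = tf.Variable(tf.zeros([3]), name="v1")
v2 = tf.Variable(tf.ones([3]), name="v2")
saver = tf.train.Saver()  # by default, saves all variables

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # save() returns the checkpoint prefix, here "/tmp/model.ckpt"
    save_path = saver.save(sess, "/tmp/model.ckpt")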
Restoring Models
The first thing to do when restoring a TensorFlow model is to load the graph structure from the ".meta" file into the current graph. The current graph can be explored using tf.get_default_graph().
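Once the .meta file is imported, you can inspect the restored structure, for example by listing every operation name (a quick sketch; the path is illustrative):

import tensorflow as tf

saver = tf.train.import_meta_graph('/tmp/model.ckpt.meta')
graph = tf.get_default_graph()

# Print the name of every operation defined in the imported graph
for op in graph.get_operations():
    print(op.name)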
The .pb format is the protocol buffer (protobuf) format, and in TensorFlow this format is used to hold models. Protobuf is a general-purpose data serialization format from Google that is convenient to transport, since it compacts data efficiently and enforces a structure on it.
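For comparison, loading a frozen .pb model reads the serialized GraphDef and imports it into the default graph (a sketch; the file path is illustrative):

import tensorflow as tf

with tf.gfile.GFile("/tmp/frozen_model.pb", "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

# Merge the loaded GraphDef into the current default graph
tf.import_graph_def(graph_def, name="")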
Try this:
import tensorflow as tf

with tf.Session() as sess:
    # Rebuild the graph from the .meta file, then load the saved values
    saver = tf.train.import_meta_graph('/tmp/model.ckpt.meta')
    saver.restore(sess, "/tmp/model.ckpt")
The TensorFlow save method saves three kinds of files because it stores the graph structure separately from the variable values. The .meta file describes the saved graph structure, so you need to import it before restoring the checkpoint (otherwise the saver doesn't know which variables the saved checkpoint values correspond to).
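Once both the graph and the values are restored, you can pull tensors back out by name; a sketch (the name "v1:0" assumes a variable was created with name="v1" before saving):

import tensorflow as tf

with tf.Session() as sess:
    saver = tf.train.import_meta_graph('/tmp/model.ckpt.meta')
    saver.restore(sess, "/tmp/model.ckpt")
    # ":0" refers to the first output of the op named "v1"
    v1 = tf.get_default_graph().get_tensor_by_name("v1:0")
    print(sess.run(v1))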
Alternatively, you could do this:
# Recreate the EXACT SAME variables
v1 = tf.Variable(..., name="v1")
v2 = tf.Variable(..., name="v2")
...
# Now load the checkpoint variable values
with tf.Session() as sess:
    # The Saver matches the recreated variables to the checkpointed values
    saver = tf.train.Saver()
    saver.restore(sess, "/tmp/model.ckpt")
Even though there is no file named model.ckpt, you still refer to the saved checkpoint by that name when restoring it. From the saver.py source code:
Users only need to interact with the user-specified prefix... instead of any physical pathname.
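In other words, "/tmp/model.ckpt" is only a prefix; the physical files carry suffixes. A quick way to confirm this (sketch):

import os

# The prefix itself is not a file on disk...
print(os.path.exists("/tmp/model.ckpt"))  # False
# ...but the suffixed files are:
print([f for f in os.listdir("/tmp") if f.startswith("model.ckpt")])
# e.g. ['model.ckpt.meta', 'model.ckpt.index', 'model.ckpt.data-00000-of-00001']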
meta file: describes the saved graph structure; it includes GraphDef, SaverDef, and so on. Applying tf.train.import_meta_graph('/tmp/model.ckpt.meta') restores both the Saver and the Graph.
index file: a string-to-string immutable table (tensorflow::table::Table). Each key is the name of a tensor and its value is a serialized BundleEntryProto. Each BundleEntryProto describes the metadata of a tensor: which of the "data" files contains the content of the tensor, the offset into that file, a checksum, some auxiliary data, etc.
data file: a TensorBundle collection that saves the values of all variables.
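You can inspect what the index/data pair stores without rebuilding the graph; in TF 1.x, tf.train.NewCheckpointReader reads the bundle directly (a sketch; the tensor name is illustrative):

import tensorflow as tf

reader = tf.train.NewCheckpointReader("/tmp/model.ckpt")

# Mapping of every saved tensor name to its shape, read from the index file
for name, shape in reader.get_variable_to_shape_map().items():
    print(name, shape)

# Fetch one tensor's values from the data file(s); "v1" is illustrative
value = reader.get_tensor("v1")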
I am restoring trained word embeddings from the Word2Vec TensorFlow tutorial.
In case you have created multiple checkpoints, the files created look like this:
model.ckpt-55695.data-00000-of-00001
model.ckpt-55695.index
model.ckpt-55695.meta
try this:

def restore_session(self, session):
    saver = tf.train.import_meta_graph('./tmp/model.ckpt-55695.meta')
    saver.restore(session, './tmp/model.ckpt-55695')
when calling restore_session():
def test_word2vec():
    opts = Options()
    with tf.Graph().as_default(), tf.Session() as session:
        with tf.device("/cpu:0"):
            model = Word2Vec(opts, session)
            model.restore_session(session)
            model.get_embedding("assistance")
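If you would rather not hard-code the step number, tf.train.latest_checkpoint can look up the most recent prefix for you (a sketch; it assumes the directory contains the checkpoint state file that tf.train.Saver writes alongside the data):

import tensorflow as tf

# Returns e.g. './tmp/model.ckpt-55695', or None if no checkpoint is found
latest = tf.train.latest_checkpoint('./tmp')
saver = tf.train.import_meta_graph(latest + '.meta')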