I'm using a custom tf.Estimator object to train a neural network. The problem is the size of the events file after training - it is unreasonably large.
I've already solved the problem of part of the dataset being saved into the graph as a constant by using tf.data.Dataset.from_generator().
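For context, this is roughly the pattern I used (the array and generator here are illustrative stand-ins, not my actual code):

import numpy as np
import tensorflow as tf

features = np.random.rand(1000, 32).astype(np.float32)  # stand-in for the real data

def gen():
    for row in features:
        yield row

# Feeding the data through a generator keeps the array out of the graph,
# so it is not serialized into the events file as a giant constant.
dataset = tf.data.Dataset.from_generator(
    gen, output_types=tf.float32, output_shapes=(32,))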
However, the file is still quite large, and when starting TensorBoard I get the message:
W0225 10:38:07.443567 140693578311424 tf_logging.py:120] Found more than one metagraph event per run. Overwriting the metagraph with the newest event.
So I suppose that I'm creating and saving many different graphs in this events file. Is it possible to turn off this saving, or to save only the first copy?
For now, the only way I've found is to delete all the default logs by removing the events files with

import glob, os
list(map(os.remove, glob.glob(os.path.join(runtime_params['model_dir'], 'events.out.tfevents*'))))
However, this is a bad solution for me, as I would prefer to keep the summaries and, ideally, one copy of the graph.
From the documentation, I can see that

Estimators automatically write the following to disk: checkpoints, which are versions of the model created during training, and event files, which contain information that TensorBoard uses to create visualizations.
You need to use the TensorBoard tool to visualize the contents of your summary logs. The events file can also be read programmatically; the following example shows how to read events written to an events file.
import tensorflow as tf

# This example supposes that the events file contains summaries with a
# summary value tag 'loss'. These could have been added by calling
# `add_summary()`, passing the output of a scalar summary op created
# with `tf.compat.v1.summary.scalar('loss', loss_tensor)`.
path_to_events_file = 'path/to/events.out.tfevents...'  # replace with your file
for e in tf.compat.v1.train.summary_iterator(path_to_events_file):
    for v in e.summary.value:
        if v.tag == 'loss':
            print(v.simple_value)
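If you want to keep the summaries but only one copy of the graph, one possible workaround is to rewrite the events file yourself: an events file is a TFRecord file of serialized Event protos, so you can copy every record except the redundant metagraphs. A sketch (the function name and paths are placeholders, and this is untested across all TF versions):

import tensorflow as tf

def strip_extra_metagraphs(src_path, dst_path):
    seen_metagraph = False
    with tf.io.TFRecordWriter(dst_path) as writer:
        for e in tf.compat.v1.train.summary_iterator(src_path):
            # Drop every metagraph event after the first one.
            if e.HasField('meta_graph_def'):
                if seen_metagraph:
                    continue
                seen_metagraph = True
            writer.write(e.SerializeToString())

strip_extra_metagraphs('path/to/events.out.tfevents.OLD',
                       'path/to/events.out.tfevents.NEW')

Run this after training and point TensorBoard at the rewritten file; all summaries are preserved, but only the first metagraph survives, which should also silence the "more than one metagraph event per run" warning.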