I am learning to use TensorBoard, and every time I launch tensorboard I get the following message in my terminal:
WARNING:tensorflow:Found more than one graph event per run. Overwriting the graph with the newest event.
I assume this is because I've run the same model multiple times with the same name. I just want to run my model multiple times and be able to inspect what it's doing using TensorBoard. Is just re-running:
tensorboard --logdir=path/to/log-directory
not the usual way to do it? Or what is the suggested workflow when I want to run the same model multiple times and explore different learning algorithms, step sizes, initializations, etc.? Is it really necessary to set up a new log directory each time?
When you export the graph, TensorFlow creates a new event file with the log information, so every time you run the model the new information is added to the same folder.
Since TensorBoard cannot differentiate one run from another, it shows that warning. So yes, you should use a different log folder per run. Indeed, some of the examples remove the log directory before running a graph. A sketch of one common workflow is below.
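A minimal sketch of that workflow, assuming the TF 1.x-style tf.summary.FileWriter API (with TF 2.x you would use tf.summary.create_file_writer instead) and a made-up base directory, is to give every run its own timestamped subdirectory under a common parent:

import os
import time
import tensorflow as tf

# Hypothetical base directory; adjust to your own setup.
base_logdir = "path/to/log-directory"

# One unique subdirectory per run (e.g. per learning rate or initialization),
# so TensorBoard keeps the runs apart instead of overwriting the graph.
logdir = os.path.join(base_logdir, time.strftime("run_%Y%m%d-%H%M%S"))

# ... build your graph here ...

writer = tf.summary.FileWriter(logdir, tf.get_default_graph())
# ... training loop: writer.add_summary(summary, step) as usual ...
writer.close()

You then point TensorBoard at the parent directory once (tensorboard --logdir=path/to/log-directory), and each subdirectory shows up as a separate run in the UI, so you can compare learning rates, step sizes, etc. side by side instead of overwriting the previous graph.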