 

Remove data from tensorboard event files to make them smaller

When I train a model for multiple days with image summaries activated, my .tfevents files become huge (> 70 GiB).

I don't want to deactivate the image summary, as it allows me to visualize the progress of the network during training. However, once the network is trained, I no longer need that information (in fact, I'm not even sure it is possible to visualize previous images with TensorBoard).

I would like to be able to remove the image summaries from the event file without losing other information, like the loss curve (as it is useful for comparing models).

One solution would be to use two separate summary writers (one for the images and one for the loss), but I would like to know if there is a better way.
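
For illustration, here is a minimal sketch of that two-writer workaround, assuming the TF 1.x graph-mode API; the tensor names, directory paths, and the every-1000-steps threshold are placeholders, not part of the original question:

    import tensorflow as tf  # TF 1.x

    # hypothetical summary ops built from tensors already in the graph
    loss_summary = tf.summary.scalar('loss', loss_tensor)          # loss_tensor: your loss op
    image_summary = tf.summary.image('predictions', image_tensor,  # image_tensor: your image batch
                                     max_outputs=3)

    # two writers, so the heavy image events land in a separate event file
    scalar_writer = tf.summary.FileWriter('logs/run1/scalars')
    image_writer = tf.summary.FileWriter('logs/run1/images')

    # inside the training loop (sess and step come from your training code):
    loss_val, img_val = sess.run([loss_summary, image_summary])
    scalar_writer.add_summary(loss_val, global_step=step)
    if step % 1000 == 0:  # write the expensive image summary only occasionally
        image_writer.add_summary(img_val, global_step=step)

Once training is done, the image event file can simply be deleted while the scalar one is kept.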

asked Mar 13 '17 by Conchylicultor




2 Answers

It is certainly better to save the big summaries less often, as Terry has suggested, but in case you already have a huge event file, you can still reduce its size by deleting some of the summaries.

I had this issue myself: I had saved a lot of image summaries that I no longer needed, so I wrote a script that copies the event file while keeping only the scalar summaries: https://gist.github.com/serycjon/c9ad58ecc3176d87c49b69b598f4d6c6

The important stuff is:

import tensorflow as tf  # TF 1.x API

# path to the original event file and a writer that produces the filtered copy
event_file_path = 'path/to/events.out.tfevents.xxx'   # placeholder: the original event file
writer = tf.summary.FileWriter('path/to/filtered_run')  # placeholder: output directory

for event in tf.train.summary_iterator(event_file_path):
    event_type = event.WhichOneof('what')
    if event_type != 'summary':
        # copy non-summary events (file version, graph, ...) unchanged
        writer.add_event(event)
    else:
        wall_time = event.wall_time
        step = event.step

        # possible value types: simple_value, image, histo, audio
        # keep only the scalar (simple_value) summaries
        filtered_values = [value for value in event.summary.value
                           if value.HasField('simple_value')]
        summary = tf.Summary(value=filtered_values)

        # rebuild the event with the original timestamp and step
        filtered_event = tf.summary.Event(summary=summary,
                                          wall_time=wall_time,
                                          step=step)
        writer.add_event(filtered_event)

writer.close()

You can use this as a base for more complicated things, like keeping only every 100th image summary, filtering based on the summary tag, etc.
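
As a rough sketch of such a filter (a drop-in replacement for the filtered_values line above; keeping every 100th image summary is just an example threshold, not something from the answer):

    # keep all scalars, plus every 100th image summary
    filtered_values = [value for value in event.summary.value
                       if value.HasField('simple_value')
                       or (value.HasField('image') and event.step % 100 == 0)]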

answered Nov 23 '22 by serycjon


If you look at the event types in the log using @serycjon's loop you'll see that the graph_def and meta_graph_def might be saved often.

I had 46 GB worth of logs that I reduced to 1.6 GB by removing all the graph events. You can leave one graph so that you can still view it in TensorBoard.
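
A hedged sketch of that idea, building on the loop from the previous answer (event_file_path and writer as defined there; keeping only the first graph of each kind is my own assumption about how "leave one graph" could be done):

    # copy the event file, dropping all but the first graph_def / meta_graph_def
    seen_graphs = set()
    for event in tf.train.summary_iterator(event_file_path):
        kind = event.WhichOneof('what')
        if kind in ('graph_def', 'meta_graph_def'):
            if kind in seen_graphs:
                continue  # skip repeated graph events
            seen_graphs.add(kind)
        writer.add_event(event)
    writer.close()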

answered Nov 23 '22 by nio1814