I'm following Sentdex's DQN tutorial and I'm stuck trying to rewrite the custom TensorBoard callback for TF 2.0. The point is to write the `**stats` dict to a log file, for example: `{'reward_avg': -99.0, 'reward_min': -200, 'reward_max': 2, 'epsilon': 1}`
Original code:

```python
class ModifiedTensorBoard(TensorBoard):

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.step = 1
        self.writer = tf.summary.FileWriter(self.log_dir)

    # Custom method for saving own metrics
    # Creates writer, writes custom metrics and closes writer
    def update_stats(self, **stats):
        self._write_logs(stats, self.step)
```
My attempt:

```python
def update_stats(self, **stats):
    for name, value in stats.items():
        with self.writer.as_default():
            tf.summary.scalar(name, value, self.step)
```
This way I'm getting: `TypeError: unsupported operand type(s) for +: 'ModifiedTensorBoard' and 'list'`
In TensorFlow 1.x, a summary was a special type of object that you had to construct and hand off to a `FileWriter` to get data into TensorBoard. In TensorFlow 2.x, summaries are much simpler: you call a `tf.summary` op inside a writer's `as_default()` context, and the data is written out directly.
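As a minimal sketch of the TF 2.x style (the log directory name here is just an example):

```python
import tensorflow as tf

# Create one writer for a log directory...
writer = tf.summary.create_file_writer("logs/summary-demo")

# ...then call summary ops inside its context; each call writes directly,
# no separate Summary protobuf or add_summary() step needed.
with writer.as_default():
    tf.summary.scalar("reward_avg", -99.0, step=1)
    tf.summary.scalar("epsilon", 1.0, step=1)

writer.flush()
```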
I followed the same tutorial, here's what I did to make it work:
Here's the `ModifiedTensorBoard` class:

```python
import os
import tensorflow as tf
from tensorflow.keras.callbacks import TensorBoard


class ModifiedTensorBoard(TensorBoard):

    # Overriding init to set initial step and writer (we want one log file for all .fit() calls)
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.step = 1
        self.writer = tf.summary.create_file_writer(self.log_dir)
        # MODEL_NAME is a constant defined elsewhere in the tutorial
        self._log_write_dir = os.path.join(self.log_dir, MODEL_NAME)

    # Overriding this method to stop creating default log writer
    def set_model(self, model):
        pass

    # Overridden, saves logs with our step number
    # (otherwise every .fit() will start writing from 0th step)
    def on_epoch_end(self, epoch, logs=None):
        self.update_stats(**logs)

    # Overridden
    # We train for one batch only, no need to save anything at epoch end
    def on_batch_end(self, batch, logs=None):
        pass

    # Overridden, so won't close writer
    def on_train_end(self, _):
        pass

    def on_train_batch_end(self, batch, logs=None):
        pass

    # Custom method for saving own metrics
    # Creates writer, writes custom metrics and closes writer
    def update_stats(self, **stats):
        self._write_logs(stats, self.step)

    def _write_logs(self, logs, index):
        with self.writer.as_default():
            for name, value in logs.items():
                tf.summary.scalar(name, value, step=index)
            self.step += 1
            self.writer.flush()
```
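For reference, here's a condensed, self-contained sketch showing how the class gets used. `MODEL_NAME` and the log-dir pattern are assumptions modeled on the tutorial, and this version keeps only the writer and `update_stats`, dropping the callback overrides:

```python
import os
import time
import tensorflow as tf
from tensorflow.keras.callbacks import TensorBoard

MODEL_NAME = "dqn-demo"  # hypothetical; the tutorial defines its own constant


class MiniTensorBoard(TensorBoard):
    """Condensed ModifiedTensorBoard: one persistent writer + update_stats."""

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.step = 1
        self.writer = tf.summary.create_file_writer(self.log_dir)
        self._log_write_dir = os.path.join(self.log_dir, MODEL_NAME)

    def update_stats(self, **stats):
        # Write each stat as a scalar at the current step, then advance the step
        with self.writer.as_default():
            for name, value in stats.items():
                tf.summary.scalar(name, value, step=self.step)
            self.step += 1
            self.writer.flush()


tb = MiniTensorBoard(log_dir=f"logs/{MODEL_NAME}-{int(time.time())}")
tb.update_stats(reward_avg=-99.0, reward_min=-200, reward_max=2, epsilon=1)
```

In the tutorial's training loop, you would pass the callback to `model.fit(..., callbacks=[tensorboard])` and call `update_stats(...)` once per aggregation window with the episode statistics.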