I have the following code for logging the train and validation loss for each epoch using the WandB API. I'm not sure, though, why the train loss and val loss don't end up at the same step for each epoch. Any idea how that could be fixed?
wandb.log({"train loss": train_epoch_loss,
           "val loss": val_epoch_loss,
           "epoch": epoch})
wandb.log({"train acc": train_epoch_acc,
           "val acc": val_epoch_acc,
           "epoch": epoch})
wandb.log({"best val acc": best_acc, "epoch": epoch})
As you can see, val loss vs. epochs and train loss vs. epochs end up as two completely separate charts, while I would like to have both of them in one plot in WandB.
I work at Weights & Biases, happy to help:
2 metrics on the same chart
To plot 2 metrics on the same chart, click the pencil icon on the chart to edit it, then add additional metrics to the y-axis.
Change default x-axis
You can also change the x-axis to plot against "epoch" instead of the default wandb step. If you'd like this behaviour by default, you can call wandb.define_metric once before you start training and set the x-axis to be epoch. See the define_metric docs for more.
Logging step
One thing to be mindful of is that when you log your validation metrics, you'd like them to be logged at the same step as the train metrics. In this case you can do something like this:
for step, batch in my_data:
    ...
    train_loss = ...
    # Start a fresh dict each step so a stale val_loss isn't re-logged.
    metrics = {"train_loss": train_loss}
    if step % val_steps == 0:
        val_loss = ...
        metrics["val_loss"] = val_loss
    # A single log call per step: train_loss every step, plus val_loss
    # at the same step whenever it was computed.
    wandb.log(metrics)
Alternatively, you can pass the commit=False argument to wandb.log to store metrics without incrementing the wandb step, and then call wandb.log() without commit=False when you want to increment the step.