tqdm flushes a lot in a distributed training setting (torch.distributed.run). Is there a way to only display the bar from the master node?
You can switch it off by passing disable=True to tqdm in the non-master processes, for example:
from tqdm import tqdm

# ...
master_process = ddp_rank == 0  # True only on the master (rank 0) process
# ...
for epoch in range(epoch_num):
    with tqdm(dataloader, disable=not master_process) as pbar:
        # ...
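If you need a way to obtain ddp_rank when launching with torch.distributed.run (torchrun), one common option is to read the RANK environment variable that the launcher sets for each process, or to call torch.distributed.get_rank() after the process group is initialized. Below is a minimal sketch; epoch_num and dataloader are placeholders for your own epoch count and DataLoader:

import os
import torch.distributed as dist
from tqdm import tqdm

# torch.distributed.run / torchrun sets RANK (and LOCAL_RANK) for every worker
ddp_rank = int(os.environ.get("RANK", 0))
master_process = ddp_rank == 0

# alternatively, once dist.init_process_group() has been called:
# ddp_rank = dist.get_rank()

for epoch in range(epoch_num):
    # non-master ranks create the bar but never render it
    with tqdm(dataloader, disable=not master_process) as pbar:
        for batch in pbar:
            pass  # training step goes here

With this setup only rank 0 writes progress output, so the console is not flooded by one bar per process.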