 

Pytorch "NCCL error": unhandled system error, NCCL version 2.4.8"

Tags:

python

pytorch

I use PyTorch for distributed training of my model. I have two nodes with two GPUs each. On the first node I run:

python train_net.py  --config-file configs/InstanceSegmentation/pointrend_rcnn_R_50_FPN_1x_coco.yaml  --num-gpu 2  --num-machines 2 --machine-rank 0 --dist-url tcp://192.168.**.***:8000

and on the second node:

python train_net.py  --config-file configs/InstanceSegmentation/pointrend_rcnn_R_50_FPN_1x_coco.yaml  --num-gpu 2  --num-machines 2 --machine-rank 1 --dist-url tcp://192.168.**.***:8000

However, the second node fails with a RuntimeError:

global_rank 3 machine_rank 1 num_gpus_per_machine 2 local_rank 1
global_rank 2 machine_rank 1 num_gpus_per_machine 2 local_rank 0
Traceback (most recent call last):
  File "train_net.py", line 109, in <module>
    args=(args,),
  File "/root/detectron2_repo/detectron2/engine/launch.py", line 49, in launch
    daemon=False,
  File "/root/anaconda3/envs/PointRend/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 171, in spawn
    while not spawn_context.join():
  File "/root/anaconda3/envs/PointRend/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 118, in join
    raise Exception(msg)
Exception:

-- Process 0 terminated with the following error:
Traceback (most recent call last):
  File "/root/anaconda3/envs/PointRend/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
    fn(i, *args)
  File "/root/detectron2_repo/detectron2/engine/launch.py", line 72, in _distributed_worker
    comm.synchronize()
  File "/root/detectron2_repo/detectron2/utils/comm.py", line 79, in synchronize
    dist.barrier()
  File "/root/anaconda3/envs/PointRend/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 1489, in barrier
    work = _default_pg.barrier()
RuntimeError: NCCL error in: /pytorch/torch/lib/c10d/ProcessGroupNCCL.cpp:410, unhandled system error, NCCL version 2.4.8

If I change --machine-rank from 1 to 0, no error is reported, but then it is not actually distributed training. Does anyone know why this error occurs?

ZFS asked Apr 07 '20 08:04


1 Answer

A number of things can cause this issue; see for example 1, 2. Adding the lines

import os
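# make NCCL print verbose debug information (must be set before the process group is initialized)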
os.environ["NCCL_DEBUG"] = "INFO"

to your script will log more specific debug info leading up to the error, giving you a more helpful error message to google.
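If the extra logging still isn't conclusive, one way to check whether the failure comes from NCCL/networking rather than from detectron2 is to run a bare init_process_group plus barrier on both machines, mirroring the dist.barrier() call that fails in comm.synchronize(). Below is a minimal sketch (my own standalone test script, not part of detectron2; the file name and the --rank/--local-rank/--world-size flags are made up for illustration), assuming 2 nodes with 2 GPUs each and the same address/port as --dist-url:

# nccl_smoke_test.py (hypothetical name): run one copy per GPU on each node
# and check whether dist.barrier() already fails outside of training.
import argparse
import os

import torch
import torch.distributed as dist


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--rank", type=int, required=True)        # global rank: 0..3 in this setup
    parser.add_argument("--local-rank", type=int, required=True)  # GPU index on this node
    parser.add_argument("--world-size", type=int, default=4)      # 2 nodes x 2 GPUs
    args = parser.parse_args()

    os.environ["NCCL_DEBUG"] = "INFO"      # verbose NCCL logging
    torch.cuda.set_device(args.local_rank)
    dist.init_process_group(
        backend="nccl",
        init_method="tcp://192.168.**.***:8000",  # same address/port as --dist-url
        rank=args.rank,
        world_size=args.world_size,
    )
    dist.barrier()  # the same collective that raises the NCCL error above
    print(f"rank {args.rank}: barrier passed")
    dist.destroy_process_group()


if __name__ == "__main__":
    main()

For example, on the second node you would start python nccl_smoke_test.py --rank 2 --local-rank 0 and python nccl_smoke_test.py --rank 3 --local-rank 1 (and ranks 0/1 on the first node). If the barrier here already raises the same NCCL error, the NCCL_DEBUG output of this small script is usually the easiest thing to search for.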

Jacob Stern answered Nov 08 '22 07:11