Distributed training hanging

Python: 3.9.12
PyTorch: 1.11.0+cu102
PyTorch Lightning: 1.6.1
guildai: 0.8.0

I’m trying to run distributed training (DDP) on a single machine with 4 GPUs. It works fine when I run the script directly with python, but when I launch it through guild, it hangs.
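For reference, the Trainer is set up roughly like this (a simplified sketch, not my exact code; the real script also configures callbacks, loggers, etc.):

from pytorch_lightning import Trainer

# Simplified sketch of the relevant Trainer arguments (PL 1.6 API).
# Assumption: gpus/strategy are the only settings that matter for the hang.
trainer = Trainer(
    gpus=4,          # one machine, four GPUs
    strategy="ddp",  # DDP via the subprocess launcher
    max_epochs=100,
)

Here is the output from the guild run once it stalls at process-group initialization: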

INFO: [pytorch_lightning.utilities.distributed] Initializing distributed: GLOBAL_RANK: 0, MEMBER: 1/4
INFO: [torch.distributed.distributed_c10d] Added key: store_based_barrier_key:1 to store for rank: 0
INFO: [torch.distributed.distributed_c10d] Waiting in store based barrier to initialize process group for rank: 0, key: store_based_barrier_key:1 (world_size=4, worker_count=1, timeout=0:30:00)
... (the line above repeats until the 30-minute barrier timeout expires, at which point the run fails:)
Traceback (most recent call last):
  File "/home/davina/miniconda3/envs/ap/.guild/runs/896d6a07fe044bb5a40b01c4e6a4064f/.guild/sourcecode/autopopulus/main.py", line 223, in <module>
    main()
  File "/home/davina/miniconda3/envs/ap/.guild/runs/896d6a07fe044bb5a40b01c4e6a4064f/.guild/sourcecode/autopopulus/main.py", line 81, in main
    results = get_imputation_logic(args)(args, data)
  File "/home/davina/miniconda3/envs/ap/.guild/runs/896d6a07fe044bb5a40b01c4e6a4064f/.guild/sourcecode/autopopulus/task_logic/ae_imputation.py", line 101, in ae_imputation_logic
    ae_imputer = create_autoencoder_with_tuning(args, data, settings)
  File "/home/davina/miniconda3/envs/ap/.guild/runs/896d6a07fe044bb5a40b01c4e6a4064f/.guild/sourcecode/autopopulus/utils/tuner.py", line 43, in create_autoencoder_with_tuning
    ae_imputer.fit(data)
  File "/home/davina/miniconda3/envs/ap/.guild/runs/896d6a07fe044bb5a40b01c4e6a4064f/.guild/sourcecode/autopopulus/models/ap.py", line 128, in fit
    self._fit(data.longitudinal, "longitudinal")
  File "/home/davina/miniconda3/envs/ap/.guild/runs/896d6a07fe044bb5a40b01c4e6a4064f/.guild/sourcecode/autopopulus/models/ap.py", line 139, in _fit
    self.trainer[longitudinal_or_static].fit(ae, datamodule=data)
  File "/home/davina/miniconda3/envs/ap/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 768, in fit
    self._call_and_handle_interrupt(
  File "/home/davina/miniconda3/envs/ap/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 719, in _call_and_handle_interrupt
    return self.strategy.launcher.launch(trainer_fn, *args, trainer=self, **kwargs)
  File "/home/davina/miniconda3/envs/ap/lib/python3.9/site-packages/pytorch_lightning/strategies/launchers/subprocess_script.py", line 93, in launch
    return function(*args, **kwargs)
  File "/home/davina/miniconda3/envs/ap/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 809, in _fit_impl
    results = self._run(model, ckpt_path=self.ckpt_path)
  File "/home/davina/miniconda3/envs/ap/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1169, in _run
    self.strategy.setup_environment()
  File "/home/davina/miniconda3/envs/ap/lib/python3.9/site-packages/pytorch_lightning/strategies/ddp.py", line 151, in setup_environment
    self.setup_distributed()
  File "/home/davina/miniconda3/envs/ap/lib/python3.9/site-packages/pytorch_lightning/strategies/ddp.py", line 191, in setup_distributed
    init_dist_connection(self.cluster_environment, self._process_group_backend)
  File "/home/davina/miniconda3/envs/ap/lib/python3.9/site-packages/pytorch_lightning/utilities/distributed.py", line 354, in init_dist_connection
    torch.distributed.init_process_group(torch_distributed_backend, rank=global_rank, world_size=world_size, **kwargs)
  File "/home/davina/miniconda3/envs/ap/lib/python3.9/site-packages/torch/distributed/distributed_c10d.py", line 627, in init_process_group
    _store_based_barrier(rank, store, timeout)
  File "/home/davina/miniconda3/envs/ap/lib/python3.9/site-packages/torch/distributed/distributed_c10d.py", line 255, in _store_based_barrier
    raise RuntimeError(
RuntimeError: Timed out initializing process group in store based barrier on rank: 0, for key: store_based_barrier_key:1 (world_size=4, worker_count=1, timeout=0:30:00)

Some related errors:

Do you have a short example I can try running to reproduce this?

I know I’m a million months behind on this (my project got sidetracked), but I’m working on an MWE.
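In the meantime, here is roughly the shape I have in mind for it (an untested sketch with a dummy model and random data; everything here is a placeholder, not code from my project):

import torch
from torch.utils.data import DataLoader, Dataset
import pytorch_lightning as pl


class RandomDataset(Dataset):
    # Dummy dataset: 64 samples of 32 random features each.
    def __len__(self):
        return 64

    def __getitem__(self, idx):
        return torch.randn(32)


class BoringModel(pl.LightningModule):
    # Minimal model: one linear layer trained on a fake loss.
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def training_step(self, batch, batch_idx):
        return self.layer(batch).sum()

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)


def main():
    trainer = pl.Trainer(
        gpus=4,            # same one-machine / four-GPU setup as above
        strategy="ddp",
        max_epochs=1,
        logger=False,
        enable_checkpointing=False,
    )
    trainer.fit(BoringModel(), DataLoader(RandomDataset(), batch_size=8))


if __name__ == "__main__":
    main()

The idea is that running this directly with python should finish in a few seconds, while launching the same script through guild should (hopefully) reproduce the store-based barrier timeout above.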
