gigl.common.utils.get_distributed_backend

gigl.common.utils.torch_training.get_distributed_backend(use_cuda: bool) → Optional[str]

Returns the distributed backend based on whether distributed training is enabled and whether CUDA is used.

Args:

use_cuda (bool): Whether CUDA is used for training

Returns:

Optional[str]: The distributed backend (NCCL or GLOO) if distributed training is enabled, None otherwise
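A minimal usage sketch, assuming distributed training is enabled and the standard process-group environment variables (e.g. MASTER_ADDR, MASTER_PORT, RANK, WORLD_SIZE) have already been set, for example by torchrun:

```python
import torch
import torch.distributed as dist

from gigl.common.utils.torch_training import get_distributed_backend

# Select NCCL when CUDA is available, GLOO otherwise; the function
# returns None if distributed training is not enabled.
use_cuda = torch.cuda.is_available()
backend = get_distributed_backend(use_cuda=use_cuda)

if backend is not None:
    # Initialize the default process group with the selected backend.
    dist.init_process_group(backend=backend)
```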