gigl.utils.share_memory#
- gigl.utils.share_memory.share_memory(entity: Tensor | PartitionBook | Dict[_KeyType, Tensor] | Dict[_KeyType, PartitionBook] | None) → None#
- Based on GraphLearn-for-PyTorch’s share_memory implementation, with additional handling for empty tensors.
Calling share_memory_() on an empty tensor may cause processes to hang; the root cause of this is currently unknown. As a result, empty tensors are not moved to shared memory if they are provided.
- Args:
- entity (Optional[Union[torch.Tensor, PartitionBook, Dict[_KeyType, torch.Tensor], Dict[_KeyType, PartitionBook]]]):
Homogeneous or heterogeneous entity of tensors to move to shared memory.
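A minimal usage sketch follows, based on the signature documented above. The tensor shapes and dictionary keys are illustrative placeholders, not values from the library.

```python
import torch

from gigl.utils.share_memory import share_memory

# Homogeneous case: a single tensor is moved to shared memory in place.
node_features = torch.randn(1_000, 16)
share_memory(node_features)
assert node_features.is_shared()

# Heterogeneous case: each tensor in the dict is moved to shared memory.
# Empty tensors are skipped, per the note above about share_memory_() hangs.
features_by_type = {
    "user": torch.randn(500, 16),   # placeholder entity type and shape
    "item": torch.empty(0),         # left out of shared memory
}
share_memory(features_by_type)
assert features_by_type["user"].is_shared()

# Passing None is a no-op.
share_memory(None)
```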