distributed
Utility functions for using torch.distributed.
Functions
| Function | Summary |
| --- | --- |
| `backend` | Returns the distributed backend. |
| `barrier` | Synchronizes all processes. |
| `get_data_parallel_group` | Deprecated method. |
| `get_tensor_parallel_group` | Deprecated method. |
| `is_available` | Returns whether the distributed package is available. |
| `is_initialized` | Returns whether the distributed package is initialized. |
| `is_master` | Returns whether the current process is the master process. |
| `rank` | Returns the rank of the current process. |
| `set_data_parallel_group` | Deprecated method. |
| `set_tensor_parallel_group` | Deprecated method. |
| `size` | Returns the number of processes. |
- backend()
Returns the distributed backend.
- Return type:
str | None
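
A minimal sketch of what a `backend()` helper typically does on top of `torch.distributed`, consistent with the `str | None` return type above; the function name `get_backend_or_none` is illustrative, not part of this module.

```python
import torch.distributed as dist

def get_backend_or_none():
    """Illustrative helper: the name of the active backend (e.g. "nccl" or
    "gloo"), or None if the process group has not been initialized."""
    if dist.is_available() and dist.is_initialized():
        return dist.get_backend()
    return None
```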
- barrier(group=None)
Synchronizes all processes.
- Return type:
None
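
The usual reason to call `barrier()` is to keep fast ranks from racing ahead of a rank doing one-off work. A hedged sketch written directly against `torch.distributed` (which these utilities wrap); the cache file and the rank-0 preparation step are illustrative.

```python
import os
import torch.distributed as dist

def prepare_cache_once(cache_path: str) -> None:
    """Have rank 0 build a cache file while every other rank waits."""
    if dist.get_rank() == 0 and not os.path.exists(cache_path):
        with open(cache_path, "w") as f:
            f.write("preprocessed data")  # placeholder for real preprocessing
    # All ranks block here until every process (including rank 0) arrives,
    # so no rank reads cache_path before it is fully written.
    dist.barrier()
```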
- get_data_parallel_group()
Deprecated method.
- Return type:
None
- get_tensor_parallel_group()
Deprecated method.
- Return type:
None
- is_available()
Returns whether the distributed package is available.
- Return type:
bool
- is_initialized()
Returns whether the distributed package is initialized.
- Return type:
bool
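
`is_available()` and `is_initialized()` are the standard guards before touching any collective. A sketch of the pattern, expressed with `torch.distributed`; the helper name is illustrative.

```python
import torch
import torch.distributed as dist

def all_reduce_if_distributed(t: torch.Tensor) -> torch.Tensor:
    """Sum a tensor across processes, but degrade gracefully to a no-op
    when the run is single-process (package missing or never initialized)."""
    if dist.is_available() and dist.is_initialized():
        dist.all_reduce(t, op=dist.ReduceOp.SUM)
    return t
```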
- is_master(group=None)
Returns whether the current process is the master process.
- Return type:
bool
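
`is_master()` is typically used to restrict side effects such as logging or checkpointing to a single process. A sketch assuming the usual convention that rank 0 is the master and that a run which never initialized `torch.distributed` counts as the master; this is an assumption about the semantics, not the module's actual implementation.

```python
import torch
import torch.distributed as dist

def is_master_sketch() -> bool:
    """Illustrative definition: rank 0 is the master; an uninitialized
    single-process run is treated as the master by default."""
    if not (dist.is_available() and dist.is_initialized()):
        return True
    return dist.get_rank() == 0

def maybe_save(model: torch.nn.Module, path: str) -> None:
    # Write the checkpoint exactly once instead of once per process.
    if is_master_sketch():
        torch.save(model.state_dict(), path)
```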
- rank(group=None)
Returns the rank of the current process.
- Return type:
int
- set_data_parallel_group(group)
Deprecated method.
- set_tensor_parallel_group(group)
Deprecated method.
- size(group=None)
Returns the number of processes.
- Return type:
int
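
`rank()` and `size()` together are enough to shard work across processes. A sketch, again written against `torch.distributed` with single-process fallbacks; the round-robin split and file names are illustrative.

```python
import torch.distributed as dist

def world_rank() -> int:
    return dist.get_rank() if dist.is_available() and dist.is_initialized() else 0

def world_size() -> int:
    return dist.get_world_size() if dist.is_available() and dist.is_initialized() else 1

def shard(items):
    """Round-robin split of a list so each process handles a disjoint subset."""
    return items[world_rank()::world_size()]

# Example: 8 files across 4 processes -> rank 0 handles part-0 and part-4, etc.
my_files = shard([f"part-{i}.parquet" for i in range(8)])
```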