distributed

Utility functions for using torch.distributed.

Functions

backend                     Returns the distributed backend.
barrier                     Synchronizes all processes.
get_data_parallel_group     Deprecated method.
get_tensor_parallel_group   Deprecated method.
is_available                Returns whether the distributed package is available.
is_initialized              Returns whether the distributed package is initialized.
is_master                   Returns whether the current process is the master process.
rank                        Returns the rank of the current process.
set_data_parallel_group     Deprecated method.
set_tensor_parallel_group   Deprecated method.
size                        Returns the number of processes.

backend()

Returns the distributed backend.

Return type: str | None
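
A minimal usage sketch, assuming the module is importable as mypackage.utils.distributed (a hypothetical path; substitute this package's real one). It inspects the active backend to decide where tensors used in collectives should live; the None case presumably corresponds to a run where torch.distributed is not initialized.

    # Hypothetical import path; replace with this package's actual module.
    from mypackage.utils import distributed as dist_utils

    backend = dist_utils.backend()  # e.g. "nccl" or "gloo"; None presumably means no process group
    if backend == "nccl":
        device = "cuda"  # NCCL collectives require CUDA tensors
    else:
        device = "cpu"   # gloo (and single-process runs) can stay on CPU
    print(f"backend={backend}, collective device={device}")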

barrier(group=None)

Synchronizes all processes.

Return type: None
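
A common pattern, sketched with the same hypothetical mypackage.utils.distributed import: one process produces a file on shared storage and barrier() keeps the others from reading it too early. The file path is purely illustrative.

    # Hypothetical import path; replace with this package's actual module.
    from mypackage.utils import distributed as dist_utils

    if dist_utils.rank() == 0:
        # Only one process writes the shared artifact.
        with open("/tmp/vocab.txt", "w") as f:
            f.write("hello\nworld\n")
    dist_utils.barrier()  # everyone blocks here until all processes arrive
    vocab = open("/tmp/vocab.txt").read().splitlines()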

get_data_parallel_group()

Deprecated method.

Return type: None

get_tensor_parallel_group()

Deprecated method.

Return type: None

is_available()

Returns whether the distributed package is available.

Return type: bool

is_initialized()

Returns whether the distributed package is initialized.

Return type: bool
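
These two checks are typically combined into a guard so the same code runs both with and without torch.distributed. A sketch using the hypothetical mypackage.utils.distributed import:

    # Hypothetical import path; replace with this package's actual module.
    from mypackage.utils import distributed as dist_utils

    def world_info():
        """Return (rank, world_size), treating a non-distributed run as one process."""
        # is_available(): the distributed package exists in this build.
        # is_initialized(): a process group has actually been set up.
        if dist_utils.is_available() and dist_utils.is_initialized():
            return dist_utils.rank(), dist_utils.size()
        return 0, 1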

is_master(group=None)

Returns whether the current process is the master process.

Return type: bool
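
A sketch of master-only side effects, again with the hypothetical mypackage.utils.distributed import; the master process is conventionally rank 0, and the log file name is illustrative.

    # Hypothetical import path; replace with this package's actual module.
    from mypackage.utils import distributed as dist_utils

    # Only the master process touches shared side effects such as log files
    # or checkpoints; every other rank skips the write.
    if dist_utils.is_master():
        with open("metrics.log", "a") as f:
            f.write("epoch=3 loss=0.42\n")
    # Keep ranks in step so nobody races ahead while the master is writing.
    dist_utils.barrier()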

rank(group=None)

Returns the rank of the current process.

Return type: int
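
One use of the rank is to derive a distinct but reproducible seed per process, sketched below with the hypothetical mypackage.utils.distributed import; the base seed is arbitrary.

    # Hypothetical import path; replace with this package's actual module.
    import random

    from mypackage.utils import distributed as dist_utils

    # Offset a base seed by the rank so randomness differs across processes
    # while reruns remain deterministic.
    base_seed = 1234
    random.seed(base_seed + dist_utils.rank())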

set_data_parallel_group(group)

Deprecated method.

set_tensor_parallel_group(group)

Deprecated method.

size(group=None)

Returns the number of processes.

Return type: int
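
rank() and size() together make it easy to shard work across processes. A sketch with the hypothetical mypackage.utils.distributed import; the file names are illustrative.

    # Hypothetical import path; replace with this package's actual module.
    from mypackage.utils import distributed as dist_utils

    # Each process takes every size()-th file starting at its own rank,
    # so the shards are disjoint and together cover the whole list.
    files = [f"shard_{i:03d}.parquet" for i in range(100)]
    my_files = files[dist_utils.rank() :: dist_utils.size()]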