NIXL CTPerf#
This workload (test_template_name is NixlPerftest) runs the NIXL performance testing suite for comprehensive network performance evaluation.
Usage Examples#
Test TOML example:
```toml
name = "my_nixl_perftest_test"
description = "Example NIXL Perftest test"
test_template_name = "NixlPerftest"

[cmd_args]
docker_image_url = "<docker container url here>"
subtest = "sequential-ct-perftest"
num_user_requests = 1
batch_size = 1
num_prefill_nodes = 1
num_decode_nodes = 1
prefill_tp = 4
decode_tp = 4
isl_mean = 10000
isl_scale = 3000
model = "deepseek-r1-distill-llama-70b"
```
Test Scenario example:
```toml
name = "nixl-perftest-test"

[[Tests]]
id = "perftest.1"
num_nodes = 2
time_limit = "00:20:00"
test_name = "my_nixl_perftest_test"
```
Test-in-Scenario example:
```toml
name = "nixl-perftest-test"

[[Tests]]
id = "perftest.1"
num_nodes = 2
time_limit = "00:20:00"
name = "my_nixl_perftest_test"
description = "Example NIXL Perftest test"
test_template_name = "NixlPerftest"

[Tests.cmd_args]
docker_image_url = "<docker container url here>"
subtest = "sequential-ct-perftest"
num_user_requests = 100
batch_size = 1
num_prefill_nodes = 1
num_decode_nodes = 1
prefill_tp = 8
decode_tp = 8
model = "deepseek-r1-distill-llama-70b"

[Tests.extra_env_vars]
CUDA_VISIBLE_DEVICES = "$SLURM_LOCALID"
```
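Many of the cmd_args below accept either a scalar or a list (`int | list[int]`), and the test definition's default agent is `grid_search`. The sketch below is only an illustration of how such a sweep could expand into a Cartesian grid of runs; it is not CloudAI's actual expansion logic:

```python
# Expand list-valued arguments into one run per combination.
from itertools import product

cmd_args = {
    "num_user_requests": [100],
    "batch_size": [1],
    "prefill_tp": [4, 8],   # hypothetical sweep values
    "decode_tp": [4, 8],
}

keys = list(cmd_args)
runs = [dict(zip(keys, combo)) for combo in product(*(cmd_args[k] for k in keys))]
print(len(runs))  # 1 * 1 * 2 * 2 = 4 distinct runs
```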
API Documentation#
Command Arguments#
- pydantic model cloudai.workloads.nixl_perftest.nixl_perftest.NixlPerftestCmdArgs#
CmdArgs for NIXL Perftest.
- field subtest: Literal['sequential-ct-perftest'] [Required]#
- field perftest_script: str = '/workspace/nixl/benchmark/kvbench/main.py'#
- field matgen_script: str = '/workspace/nixl/benchmark/kvbench/test/inference_workload_matgen.py'#
- field python_executable: str = 'python'#
- field num_user_requests: int | list[int] [Required]#
- field batch_size: int | list[int] [Required]#
- field num_prefill_nodes: int | list[int] [Required]#
- field num_decode_nodes: int | list[int] [Required]#
- field isl_mean: int | list[int] | None = None#
- field isl_scale: int | list[int] | None = None#
- field prefill_tp: int | list[int] = 1#
- field prefill_pp: int | list[int] = 1#
- field prefill_cp: int | list[int] = 1#
- field decode_tp: int | list[int] = 1#
- field decode_pp: int | list[int] = 1#
- field decode_cp: int | list[int] = 1#
- field model: str | list[str] | None = None#
- field num_layers: int | None = None#
- field num_heads: int | None = None#
- field num_kv_heads: int | None = None#
- field dtype_size: int | None = None#
- field matgen_args: MatgenCmdArgs [Optional]#
- field docker_image_url: str [Required]#
URL of the Docker image to use for the benchmark.
- field etcd_path: str = 'etcd'#
Path to the etcd executable.
- field wait_etcd_for: int = 60#
Number of seconds to wait for etcd to become healthy.
- field etcd_image_url: str | None = None#
Optional URL of the Docker image to use for etcd; by default, etcd runs from the same image as the benchmark.
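The wait_etcd_for field bounds how long the workload waits for etcd to become healthy before starting. A hedged sketch of that semantics (illustrative only, not CloudAI's implementation; the `check` callable stands in for whatever health probe is used):

```python
# Poll a health-check callable until it succeeds or the deadline passes.
import time

def wait_for_healthy(check, wait_etcd_for: int = 60, interval: float = 1.0) -> bool:
    """Return True once check() is truthy, False once the deadline expires."""
    deadline = time.monotonic() + wait_etcd_for
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False

# Example: a probe that becomes healthy on its third call.
calls = iter([False, False, True])
print(wait_for_healthy(lambda: next(calls), wait_etcd_for=10, interval=0.01))  # prints True
```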
Test Definition#
- class cloudai.workloads.nixl_perftest.nixl_perftest.NixlPerftestTestDefinition(
- *,
- name: str,
- description: str,
- test_template_name: str,
- cmd_args: NixlPerftestCmdArgs,
- extra_env_vars: dict[str, str | List[str]] = {},
- extra_cmd_args: dict[str, str] = {},
- extra_container_mounts: list[str] = [],
- git_repos: list[GitRepo] = [],
- nsys: NsysConfiguration | None = None,
- predictor: PredictorConfig | None = None,
- agent: str = 'grid_search',
- agent_steps: int = 1,
- agent_metrics: list[str] = ['default'],
- agent_reward_function: str = 'inverse',
- agent_config: dict[str, Any] | None = None,

)

Bases:
NIXLBaseTestDefinition[NixlPerftestCmdArgs]

TestDefinition for NixlPerftest.