DeepEP Benchmark#
This workload (test_template_name = "DeepEP") runs DeepEP (Deep Expert Parallelism) Mixture-of-Experts (MoE) benchmarks within the CloudAI framework.
Overview#
DeepEP is a benchmark for measuring the performance of MoE models with distributed expert parallelism. It supports:
- Two operation modes: Standard and Low-Latency
- Multiple data types: bfloat16 and FP8
- Flexible network configurations: with or without NVLink
- Configurable model parameters: experts, tokens, hidden size, top-k (see the routing sketch below)
- Performance profiling: Kineto profiler support
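These parameters describe standard top-k MoE routing. The sketch below is purely illustrative, not a CloudAI or DeepEP API; it uses PyTorch and the parameter values from the standard-mode example below to show what tokens, num_experts, num_topk, and hidden_size control:

```python
import torch

# Values from the standard-mode example below; all names here are illustrative.
tokens, num_experts, num_topk, hidden_size = 1024, 256, 8, 7168

x = torch.randn(tokens, hidden_size)          # one activation vector per token
gate = torch.randn(hidden_size, num_experts)  # router (gating) weights

scores = torch.softmax(x @ gate, dim=-1)      # (tokens, num_experts) routing scores
weights, expert_ids = torch.topk(scores, k=num_topk, dim=-1)
# Each token is routed to its num_topk highest-scoring experts; DeepEP
# benchmarks the all-to-all dispatch/combine traffic this routing implies.
```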
Usage Example#
Test TOML example (Standard Mode):
name = "deepep_standard"
description = "DeepEP MoE Benchmark - Standard Mode"
test_template_name = "DeepEP"
[cmd_args]
docker_image_url = "gitlab-master.nvidia.com/ybenabou/warehouse/deepep:dp-benchmark"
mode = "standard"
tokens = 1024
num_experts = 256
num_topk = 8
hidden_size = 7168
data_type = "bfloat16"
num_warmups = 20
num_iterations = 50
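The [cmd_args] table maps one-to-one onto the DeepEPCmdArgs model documented below. As a minimal sketch, assuming CloudAI's CmdArgs is a pydantic v2 model (which the keyword-only signature below suggests), you could validate the table yourself; the file name test.toml is hypothetical:

```python
import tomllib  # Python 3.11+

from cloudai.workloads.deepep.deepep import DeepEPCmdArgs

# "test.toml" is a hypothetical path holding the standard-mode example above.
with open("test.toml", "rb") as f:
    raw = tomllib.load(f)

# Assumes a pydantic v2 model; fields left unset keep the documented defaults.
args = DeepEPCmdArgs.model_validate(raw["cmd_args"])
print(args.mode, args.num_iterations)  # standard 50
```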
Test TOML example (Low-Latency Mode):
name = "deepep_low_latency"
description = "DeepEP MoE Benchmark - Low Latency Mode"
test_template_name = "DeepEP"
[cmd_args]
docker_image_url = "gitlab-master.nvidia.com/ybenabou/warehouse/deepep:dp-benchmark"
mode = "low_latency"
tokens = 128
num_experts = 256
num_topk = 1
hidden_size = 7168
data_type = "bfloat16"
allow_nvlink_for_low_latency = false
allow_mnnvl = false
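For intuition, the dispatch payload implied by these parameters is small, which is the point of low-latency mode. A rough back-of-envelope sketch in plain arithmetic (not a DeepEP or CloudAI API; it ignores routing metadata and any FP8 scaling factors):

```python
# Back-of-envelope dispatch volume for the low-latency example above.
tokens, hidden_size, num_topk = 128, 7168, 1
bytes_per_elem = 2  # bfloat16

# Each token's activation vector is sent to num_topk experts.
payload_bytes = tokens * hidden_size * bytes_per_elem * num_topk
print(f"{payload_bytes / 2**20:.2f} MiB per dispatch")  # 1.75 MiB
```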
Test Scenario example:
name = "deepep-benchmark"
[[Tests]]
id = "Tests.1"
test_name = "deepep_standard"
num_nodes = 2
time_limit = "00:30:00"
Test-in-Scenario example:
name = "deepep-benchmark"
[[Tests]]
id = "Tests.1"
num_nodes = 2
time_limit = "00:30:00"
name = "deepep_standard"
description = "DeepEP MoE Benchmark"
test_template_name = "DeepEP"
[Tests.cmd_args]
docker_image_url = "gitlab-master.nvidia.com/ybenabou/warehouse/deepep:dp-benchmark"
mode = "standard"
tokens = 1024
num_experts = 256
num_topk = 8
API Documentation#
Command Arguments#
class cloudai.workloads.deepep.deepep.DeepEPCmdArgs(
    *,
    docker_image_url: str,
    mode: Literal['standard', 'low_latency'] = 'standard',
    tokens: int = 1024,
    num_experts: int = 256,
    num_topk: int = 8,
    hidden_size: int = 7168,
    data_type: Literal['bfloat16', 'fp8'] = 'bfloat16',
    allow_nvlink_for_low_latency: bool = False,
    allow_mnnvl: bool = False,
    round_scale: bool = False,
    use_ue8m0: bool = False,
    num_warmups: int = 20,
    num_iterations: int = 50,
    shuffle_columns: bool = False,
    use_kineto_profiler: bool = False,
    num_sms: int = 24,
    num_qps_per_rank: int = 12,
    config_file_path: str = '/tmp/config.yaml',
    results_dir: str = '/workspace/dp-benchmark/results',
    **extra_data: Any,
)

Bases: CmdArgs

DeepEP benchmark command arguments.
- docker_image_url: str#
- mode: Literal['standard', 'low_latency']#
- tokens: int#
- num_experts: int#
- num_topk: int#
- hidden_size: int#
- data_type: Literal['bfloat16', 'fp8']#
- allow_nvlink_for_low_latency: bool#
- allow_mnnvl: bool#
- round_scale: bool#
- use_ue8m0: bool#
- num_warmups: int#
- num_iterations: int#
- shuffle_columns: bool#
- use_kineto_profiler: bool#
- num_sms: int#
- num_qps_per_rank: int#
- config_file_path: str#
- results_dir: str#
Test Definition#
class cloudai.workloads.deepep.deepep.DeepEPTestDefinition(
    *,
    name: str,
    description: str,
    test_template_name: str,
    cmd_args: DeepEPCmdArgs,
    extra_env_vars: dict[str, str | List[str]] = {},
    extra_cmd_args: dict[str, str] = {},
    extra_container_mounts: list[str] = [],
    git_repos: list[GitRepo] = [],
    nsys: NsysConfiguration | None = None,
    predictor: PredictorConfig | None = None,
    agent: str = 'grid_search',
    agent_steps: int = 1,
    agent_metrics: list[str] = ['default'],
    agent_reward_function: str = 'inverse',
)

Bases: TestDefinition

Test object for DeepEP MoE benchmark.
- cmd_args: DeepEPCmdArgs#
- property docker_image: DockerImage#
- property installables: list[Installable]#
- property cmd_args_dict: dict#
Return command arguments as dict, excluding CloudAI-specific fields.
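To see what actually reaches the benchmark, a test definition can be built in code. A minimal sketch using only the constructors documented above, with argument values taken from the standard-mode example:

```python
from cloudai.workloads.deepep.deepep import DeepEPCmdArgs, DeepEPTestDefinition

td = DeepEPTestDefinition(
    name="deepep_standard",
    description="DeepEP MoE Benchmark - Standard Mode",
    test_template_name="DeepEP",
    cmd_args=DeepEPCmdArgs(
        docker_image_url="gitlab-master.nvidia.com/ybenabou/warehouse/deepep:dp-benchmark",
        mode="standard",
        tokens=1024,
    ),
)

# Per the property's docstring, cmd_args_dict returns the arguments as a
# plain dict with CloudAI-specific fields excluded.
print(td.cmd_args_dict)
```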