MegatronBridge#
This workload (test_template_name is MegatronBridge) submits training and fine-tuning jobs based on the Megatron-Bridge framework.
Usage Examples#
Test TOML example:
name = "megatron_bridge_qwen_30b"
description = "Megatron-Bridge run via CloudAI SlurmSystem for Qwen3 30B A3B"
test_template_name = "MegatronBridge"
[cmd_args]
# Container can be an NGC/enroot URL (nvcr.io#...) or a local .sqsh path.
container_image = "nvcr.io#nvidia/nemo:25.11.01"
model_name = "qwen3"
model_size = "30b_a3b"
task = "pretrain"
domain = "llm"
compute_dtype = "fp8_mx"
hf_token = "hf_xxx"
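Note that the test definition also requires pinning the Megatron-Bridge repository via [[git_repos]] (see validate_git_repos_has_megatron_bridge_repo in the API documentation below). A minimal sketch, assuming the GitRepo fields url and commit shown in the test definition signature map directly to TOML keys; the URL and commit here are placeholders:
[[git_repos]]
url = "https://github.com/NVIDIA-NeMo/Megatron-Bridge.git"
commit = "main"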
Test Scenario example:
name = "megatron_bridge_qwen_30b"
[[Tests]]
id = "megatron_bridge_qwen_30b"
test_name = "megatron_bridge_qwen_30b"
num_nodes = 2
Test-in-Scenario example (the test is defined inline within the scenario):
name = "megatron-bridge-test"
[[Tests]]
id = "mbridge.1"
num_nodes = 2
time_limit = "00:30:00"
name = "megatron_bridge_qwen_30b"
description = "Megatron-Bridge run via CloudAI SlurmSystem for Qwen3 30B A3B"
test_template_name = "MegatronBridge"
[Tests.cmd_args]
container_image = "nvcr.io#nvidia/nemo:25.11.01"
model_name = "qwen3"
model_size = "30b_a3b"
task = "pretrain"
domain = "llm"
compute_dtype = "fp8_mx"
hf_token = "hf_xxx"
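For the inline form, the same repository pin would go under the test entry; a sketch, assuming the scenario parser accepts a nested [[Tests.git_repos]] array of tables, with placeholder url and commit values:
[[Tests.git_repos]]
url = "https://github.com/NVIDIA-NeMo/Megatron-Bridge.git"
commit = "main"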
API Documentation#
Command Arguments#
- class cloudai.workloads.megatron_bridge.megatron_bridge.MegatronBridgeCmdArgs(
- *,
- gpu_type: str = 'gb200',
- log_dir: str = '',
- time_limit: str = '00:30:00',
- container_image: str = '',
- num_gpus: int = 8,
- gpus_per_node: int = 8,
- custom_mounts: str | None = None,
- enable_vboost: bool | None = False,
- dryrun: bool | None = False,
- enable_nsys: bool | None = False,
- detach: bool | None = None,
- model_name: Annotated[str, MinLen(min_length=1)],
- model_size: Annotated[str, MinLen(min_length=1)],
- domain: str = 'llm',
- task: str = 'pretrain',
- compute_dtype: str = 'bf16',
- fp8_recipe: str | None = None,
- hf_token: str | None = None,
- nemo_home: str | None = None,
- wandb_key: str | None = None,
- wandb_prj_name: str | None = None,
- wandb_exp_name: str | None = None,
- use_tokendrop: bool | List[bool] | None = None,
- use_megatron_fsdp: bool | List[bool] | None = None,
- cuda_graph_impl: str | None = None,
- cuda_graph_scope: str | List[str] | None = None,
- tp: int | List[int] | None = None,
- pp: int | List[int] | None = None,
- cp: int | List[int] | None = None,
- vp: int | List[int] | None = None,
- ep: int | List[int] | None = None,
- et: int | List[int] | None = None,
- mb: int | List[int] | None = None,
- gb: int | List[int] | None = None,
- moe_a2a_overlap: bool | List[bool] | None = None,
- max_steps: int | None = 50,
- recompute_num_layers: int | List[int] | None = None,
- activation_offload_layers: int | List[int] | None = None,
- recompute_modules: str | List[str] | None = None,
- num_distributed_optimizer_instances: int | None = None,
- **extra_data: Any,
)[source]#
Bases: CmdArgs
Megatron-Bridge launcher arguments (translated into setup_experiment.py flags).
- gpu_type: str#
- log_dir: str#
- time_limit: str#
- container_image: str#
- num_gpus: int#
- gpus_per_node: int#
- custom_mounts: str | None#
- enable_vboost: bool | None#
- dryrun: bool | None#
- enable_nsys: bool | None#
- detach: bool | None#
- domain: str#
- task: str#
- compute_dtype: str#
- fp8_recipe: str | None#
- hf_token: str | None#
- nemo_home: str | None#
- wandb_key: str | None#
- wandb_prj_name: str | None#
- wandb_exp_name: str | None#
- use_tokendrop: bool | List[bool] | None#
- use_megatron_fsdp: bool | List[bool] | None#
- cuda_graph_impl: str | None#
- cuda_graph_scope: str | List[str] | None#
- tp: int | List[int] | None#
- pp: int | List[int] | None#
- cp: int | List[int] | None#
- vp: int | List[int] | None#
- ep: int | List[int] | None#
- et: int | List[int] | None#
- mb: int | List[int] | None#
- gb: int | List[int] | None#
- moe_a2a_overlap: bool | List[bool] | None#
- max_steps: int | None#
- recompute_num_layers: int | List[int] | None#
- activation_offload_layers: int | List[int] | None#
- recompute_modules: str | List[str] | None#
- num_distributed_optimizer_instances: int | None#
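Several of the fields above (for example tp, pp, cp, vp, ep, et, mb, gb, use_tokendrop, moe_a2a_overlap) are typed as scalar-or-list. A sketch of sweeping them from the test TOML, assuming list-valued entries map onto the List[...] variants in the signature and are expanded by the default grid_search agent:
[cmd_args]
model_name = "qwen3"
model_size = "30b_a3b"
task = "pretrain"
tp = [2, 4]
pp = [1, 2]
gb = [256, 512]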
Test Definition#
- class cloudai.workloads.megatron_bridge.megatron_bridge.MegatronBridgeTestDefinition(
- *,
- name: str,
- description: str,
- test_template_name: str,
- cmd_args: MegatronBridgeCmdArgs,
- extra_env_vars: dict[str, str | List[str]] = {},
- extra_cmd_args: dict[str, str] = {},
- extra_container_mounts: list[str] = [],
- git_repos: list[GitRepo] = [],
- nsys: NsysConfiguration | None = None,
- predictor: PredictorConfig | None = None,
- agent: str = 'grid_search',
- agent_steps: int = 1,
- agent_metrics: list[str] = ['default'],
- agent_reward_function: str = 'inverse',
- nemo_run_repo: GitRepo = GitRepo(url=https://github.com/NVIDIA-NeMo/Run.git, commit=main),
)[source]#
Bases: TestDefinition
Megatron-Bridge test definition (CloudAI-managed install + Slurm submission via launcher).
- cmd_args: MegatronBridgeCmdArgs#
- nemo_run_repo: GitRepo#
- classmethod validate_git_repos_has_megatron_bridge_repo(
- v: list[GitRepo],
)#
MegatronBridge requires users to pin the Megatron-Bridge repository version via [[git_repos]].
- property docker_image: DockerImage#
- property python_executable: PythonExecutable#
- property megatron_bridge_repo: GitRepo#
- property installables: list[Installable]#
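The nemo_run_repo field defaults to GitRepo(url=https://github.com/NVIDIA-NeMo/Run.git, commit=main). A sketch of pinning it from the test TOML, assuming the GitRepo fields map to a [nemo_run_repo] table; the commit value is a placeholder:
[nemo_run_repo]
url = "https://github.com/NVIDIA-NeMo/Run.git"
commit = "<pinned-commit-sha>"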