conversion

PEFT conversion and restore utilities for LoRA modules.

Functions

freeze_lora_weights

Freeze LoRA adapter weights to prevent gradient updates during training.

replace_lora_module

Replace modules with LoRA modules.

sync_lora_weights

Broadcast LoRA adapter weights from rank 0 to all other ranks in the group.

freeze_lora_weights(model, *, layer_patterns=None, adapter_patterns=None)

Freeze LoRA adapter weights to prevent gradient updates during training.

This function sets requires_grad=False for LoRA adapter parameters (lora_a and lora_b). Useful when you want to train only the base model weights or evaluate the model without updating LoRA adapters.

Parameters:
  • model – Model containing LoRA modules whose adapter weights should be frozen

  • layer_patterns – Optional patterns (str, bytes, or Iterable) to match specific layer names. If provided, only layers matching these patterns will be affected. Supports Unix-style wildcards (e.g., "*.linear", "transformer.*")

  • adapter_patterns – Optional patterns (str or Iterable) to match specific adapter names. If provided, only adapters matching these patterns will be affected. Supports Unix-style wildcards
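The freezing behavior can be sketched in plain PyTorch. Everything below is a hypothetical stand-in (a toy LoRA-style layer and a simplified freeze helper that only handles layer_patterns); freeze_lora_weights itself operates on ModelOpt's own LoRA modules:

```python
import fnmatch

import torch
import torch.nn as nn


class ToyLoRALinear(nn.Module):
    """Hypothetical LoRA-style layer exposing lora_a / lora_b adapter weights."""

    def __init__(self, d_in, d_out, rank=4):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(d_out, d_in))
        self.lora_a = nn.Parameter(torch.randn(rank, d_in))
        self.lora_b = nn.Parameter(torch.zeros(d_out, rank))


def freeze_lora_weights_sketch(model, *, layer_patterns=None):
    """Set requires_grad=False on lora_a / lora_b parameters.

    layer_patterns: optional Unix-style wildcard(s) matched against the
    owning module's name (adapter_patterns is omitted in this sketch).
    """
    if isinstance(layer_patterns, (str, bytes)):
        layer_patterns = [layer_patterns]
    for name, param in model.named_parameters():
        layer_name, _, leaf = name.rpartition(".")
        if leaf not in ("lora_a", "lora_b"):
            continue
        if layer_patterns is not None and not any(
            fnmatch.fnmatch(layer_name, p) for p in layer_patterns
        ):
            continue
        param.requires_grad = False


model = nn.Sequential(ToyLoRALinear(8, 8), ToyLoRALinear(8, 8))
# Freeze only the first layer's adapters; base weights stay trainable.
freeze_lora_weights_sketch(model, layer_patterns=["0"])
```

After the call, model[0].lora_a and model[0].lora_b no longer receive gradients, while model[1]'s adapters and every base weight remain trainable.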

replace_lora_module(model, version=None, config=None, registry=<modelopt.torch.opt.dynamic._DMRegistryCls object>)

Replace modules with LoRA modules.

Parameters:
  • model – Model whose modules should be replaced with LoRA modules

  • version – Optional version of the conversion (default: None)

  • config – Optional configuration for the conversion (default: None)

  • registry – Dynamic module registry used to look up the LoRA replacements

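The replacement step can be sketched with plain PyTorch. The wrapper below is hypothetical; the real function swaps in ModelOpt's dynamic LoRA modules via the registry rather than this hand-rolled class:

```python
import torch
import torch.nn as nn


class LoRALinearSketch(nn.Module):
    """Hypothetical LoRA wrapper: frozen base linear plus a low-rank update."""

    def __init__(self, base: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad = False
        # lora_b starts at zero, so the low-rank delta is initially a no-op.
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))

    def forward(self, x):
        return self.base(x) + x @ self.lora_a.t() @ self.lora_b.t()


def replace_lora_module_sketch(model: nn.Module, rank: int = 4):
    """Recursively swap nn.Linear children for LoRA-wrapped versions."""
    for name, child in model.named_children():
        if isinstance(child, nn.Linear):
            setattr(model, name, LoRALinearSketch(child, rank))
        else:
            replace_lora_module_sketch(child, rank)


model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
replace_lora_module_sketch(model)
```

Because lora_b is zero-initialized, the converted model's output is identical to the original model's until the adapters are trained.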
sync_lora_weights(model, group=None)

Broadcast LoRA adapter weights from rank 0 to all other ranks in the group.

This ensures LoRA weights are identical across data-parallel replicas after random initialization. Should be called after LoRA adapters are added to the model.

Parameters:
  • model – Model containing LoRA modules to synchronize.

  • group – The process group to broadcast over (e.g., the data-parallel group). If None, uses the default process group.
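The broadcast behavior can be sketched as follows. This is a hypothetical helper, not the library's implementation: it falls back to a no-op when torch.distributed is not initialized, and returns the parameter names it would synchronize so the selection logic can be inspected without a process group:

```python
import torch
import torch.distributed as dist
import torch.nn as nn


def sync_lora_weights_sketch(model: nn.Module, group=None):
    """Broadcast lora_a / lora_b parameters from rank 0 across the group."""
    synced = []
    for name, param in model.named_parameters():
        if name.rpartition(".")[2] in ("lora_a", "lora_b"):
            synced.append(name)
            if dist.is_available() and dist.is_initialized():
                # Overwrite this rank's adapter with rank 0's values.
                dist.broadcast(param.data, src=0, group=group)
    return synced


# Toy model with manually attached adapter parameters for illustration.
toy = nn.Sequential(nn.Linear(4, 4))
toy[0].lora_a = nn.Parameter(torch.zeros(2, 4))
toy[0].lora_b = nn.Parameter(torch.zeros(4, 2))
names = sync_lora_weights_sketch(toy)
```

In the real utility this is called once, right after adapters are added, so every data-parallel replica starts training from identical adapter weights.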