conversion

PEFT conversion and restore utilities for LoRA modules.

Functions

  • freeze_lora_weights – Freeze LoRA adapter weights to prevent gradient updates during training.

  • replace_lora_module – Replace modules with LoRA modules.

freeze_lora_weights(model, *, layer_patterns=None, adapter_patterns=None)

Freeze LoRA adapter weights to prevent gradient updates during training.

This function sets requires_grad=False for the LoRA adapter parameters (lora_a and lora_b). It is useful when you want to train only the base model weights or to evaluate the model without updating the LoRA adapters.

Parameters:
  • model – Model containing LoRA modules whose adapter weights should be frozen

  • layer_patterns – Optional patterns (str, bytes, or Iterable) to match specific layer names. If provided, only layers matching these patterns will be affected. Supports Unix-style wildcards (e.g., “*.linear”, “transformer.*”)

  • adapter_patterns – Optional patterns (str or Iterable) to match specific adapter names. If provided, only adapters matching these patterns will be affected. Supports Unix-style wildcards
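
A minimal usage sketch based on the signature above. The import path is inferred from this page’s layout, model is assumed to already contain LoRA modules, and the pattern strings are illustrative:

    from modelopt.torch.peft.conversion import freeze_lora_weights

    # Freeze every LoRA adapter parameter (lora_a and lora_b) in the model.
    freeze_lora_weights(model)

    # Freeze only the adapters in layers matching a Unix-style wildcard.
    freeze_lora_weights(model, layer_patterns="transformer.*")

    # Freeze only adapters whose name matches a pattern (the adapter name here is illustrative).
    freeze_lora_weights(model, adapter_patterns="default*")

Because only requires_grad is toggled, the adapter weights stay attached and the forward pass is unchanged; gradients simply stop flowing into lora_a and lora_b.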

replace_lora_module(model, version=None, config=None, registry=<modelopt.torch.opt.dynamic._DMRegistryCls object>)

Replace modules with LoRA modules.
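
A minimal sketch of the simplest call, using only the documented defaults; the import path is inferred from this page’s layout, and model is assumed to be the torch.nn.Module to convert:

    from modelopt.torch.peft.conversion import replace_lora_module

    # Replace the model's modules with LoRA modules, relying on the
    # default version, config, and registry.
    replace_lora_module(model)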

Parameters: