convert
User-facing PEFT API for LoRA module conversion and adapter management.
Functions

- disable_adapters – Disable LoRA adapters in the model.
- enable_adapters – Enable LoRA adapters in the model.
- is_peft_model – Check whether the model has been converted to a PEFT/LoRA model.
- update_model – Update model with PEFT/LoRA adapters.
- disable_adapters(model, layers_to_disable=None, adapters_to_disable=None)
Disable LoRA adapters in the model.
- Parameters:
model – Model with LoRA adapters
layers_to_disable – Optional list of layer name patterns (wildcards or callables) to disable adapters on. If None, disables on all layers.
adapters_to_disable – Optional list of adapter name patterns (wildcards) to disable. If None, disables all adapters.
Examples
# Disable all adapters
disable_adapters(model)

# Disable adapters only on attention layers
disable_adapters(model, layers_to_disable=["attention"])

# Disable only "default" adapters
disable_adapters(model, adapters_to_disable=["default"])

# Disable "default" adapters on attention layers only
disable_adapters(model, layers_to_disable=["attention"], adapters_to_disable=["default"])
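Layer patterns may also be given as callables instead of wildcard strings. Below is a minimal sketch of that form; the assumption that a callable receives the layer name and returns True on a match is illustrative, not confirmed by the signature above:

# Hypothetical callable pattern: disable adapters on every layer whose
# name ends with ".self_attention" (assumes the callable is invoked with
# the layer name and should return True for a match).
def is_self_attention(layer_name):
    return layer_name.endswith(".self_attention")

disable_adapters(model, layers_to_disable=[is_self_attention])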
- enable_adapters(model, layers_to_enable=None, adapters_to_enable=None)
Enable LoRA adapters in the model.
- Parameters:
model – Model with LoRA adapters
layers_to_enable – Optional list of layer name patterns (wildcards or callables) to enable adapters on. If None, enables on all layers.
adapters_to_enable – Optional list of adapter name patterns (wildcards) to enable. If None, enables all adapters.
Examples
# Enable all adapters
enable_adapters(model)

# Enable adapters only on MLP layers
enable_adapters(model, layers_to_enable=["mlp"])

# Enable only "finetuned" adapters
enable_adapters(model, adapters_to_enable=["finetuned"])

# Enable "finetuned" adapters on MLP layers only
enable_adapters(model, layers_to_enable=["mlp"], adapters_to_enable=["finetuned"])
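Taken together, the two functions can toggle adapters around an inference pass, e.g. to compare base-model and adapted outputs. A minimal sketch; model, inputs, and the "finetuned" adapter name are placeholders:

# Adapters off: the model behaves like the base model.
disable_adapters(model)
base_out = model(inputs)

# Re-enable only the "finetuned" adapter and run again.
enable_adapters(model, adapters_to_enable=["finetuned"])
adapted_out = model(inputs)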
- is_peft_model(model)
Check whether the model has been converted to a PEFT/LoRA model.
This function checks if any modules in the model are LoRAModule instances, which indicates the model has already been converted to PEFT mode.
- Parameters:
model (Module) – The model to check
- Returns:
True if the model contains LoRA modules, False otherwise
- Return type:
bool
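For example, is_peft_model can guard the conversion step so update_model runs only once; peft_config below is a placeholder for your config dict or PEFTConfig:

# Convert only if the model is not already in PEFT mode.
if not is_peft_model(model):
    model = update_model(model, peft_config)
assert is_peft_model(model)  # the model now contains LoRAModule instances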
- update_model(model, config)
Update model with PEFT/LoRA adapters.
This function handles both initial PEFT conversion and adding additional adapters:

- First call: converts modules to LoRAModules and adds the first adapter
- Subsequent calls: adds new adapters to existing LoRAModules
- Parameters:
model (Module) – The model to update
config (dict[str, Any] | PEFTConfig) – PEFT configuration dict or PEFTConfig instance
- Returns:
The updated model with LoRA adapters
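A minimal end-to-end sketch of both call patterns. The config keys shown ("adapter_name", "adapter_cfg", and the wildcard-to-rank mapping) are illustrative assumptions, not the documented schema; consult PEFTConfig for the actual fields:

# First call: converts matching modules to LoRAModules and installs the
# first adapter. The dict shape here is hypothetical; see PEFTConfig.
lora_cfg = {
    "adapter_name": "default",
    "adapter_cfg": {"*": {"rank": 32}},  # assumed wildcard -> LoRA settings
}
model = update_model(model, lora_cfg)

# Subsequent call: adds a second adapter to the existing LoRAModules.
finetune_cfg = {
    "adapter_name": "finetuned",
    "adapter_cfg": {"*attention*": {"rank": 16}},
}
model = update_model(model, finetune_cfg)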