config
Configuration classes for PEFT methods.
- ModeloptConfig ExportPEFTConfig
Bases:
ModeloptBaseConfig
An empty config.
- Default config (JSON):
{}
- ModeloptConfig PEFTAttributeConfig
Bases:
ModeloptBaseConfig
Configuration for PEFT adapter attributes.
- Default config (JSON):
{ "enable": true, "rank": 64, "scale": 1.0, "lora_a_init": null, "lora_b_init": null }
- field enable: bool
If True, enables the adapter. If False, bypasses the adapter.
- field lora_a_init: str | None
Initializer from torch.nn.init (in-place; name ends with _).
- Constraints:
json_schema = {"type": "string", "title": "torch initializer", "description": "Fully-qualified callable from torch.nn.init. Must be in-place (name ends with _).", "examples": ["torch.nn.init.zeros_", "torch.nn.init.kaiming_uniform_"]}
- field lora_b_init: str | None
Initializer from torch.nn.init (in-place; name ends with _).
- Constraints:
json_schema = {"type": "string", "title": "torch initializer", "description": "Fully-qualified callable from torch.nn.init. Must be in-place (name ends with _).", "examples": ["torch.nn.init.zeros_", "torch.nn.init.kaiming_uniform_"]}
- field rank: int
The rank (dimension) of the LoRA matrices. Higher rank allows more expressiveness but uses more memory.
- field scale: float
Scaling factor for the LoRA output. Controls the magnitude of the adaptation.
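To make the rank, scale, and enable semantics concrete, here is a minimal self-contained LoRA wrapper in plain PyTorch. It is an illustrative sketch only, not modelopt's implementation; the initializer calls show the common LoRA defaults (Kaiming uniform for A, zeros for B), which is the kind of in-place torch.nn.init callable that lora_a_init and lora_b_init accept.

import torch
import torch.nn as nn

class LoRALinearSketch(nn.Module):
    """Illustrative sketch only -- not modelopt's implementation."""

    def __init__(self, base: nn.Linear, rank: int = 64, scale: float = 1.0, enable: bool = True):
        super().__init__()
        self.base = base
        self.scale = scale
        self.enable = enable
        # rank is the inner dimension shared by the two low-rank matrices.
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)
        # Common LoRA initialization; lora_a_init / lora_b_init let you override
        # this with any in-place torch.nn.init callable (name ends with "_").
        torch.nn.init.kaiming_uniform_(self.lora_a.weight)
        torch.nn.init.zeros_(self.lora_b.weight)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.base(x)
        if self.enable:  # enable=False bypasses the adapter entirely
            y = y + self.scale * self.lora_b(self.lora_a(x))
        return y

Because lora_b starts at zero, the wrapped module initially computes exactly the base layer's output; training then moves only the low-rank matrices.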
- ModeloptConfig PEFTConfig
Bases:
ModeloptBaseConfig
Default configuration for peft mode.
For adapter_cfg, later patterns override earlier ones, for example:
"adapter_cfg": {
    "*": {"rank": 32, "scale": 1, "enable": True},
    "*output_layer*": {"enable": False},
}
If a layer name matches "*output_layer*", the attributes will be replaced with {"enable": False} (see the precedence sketch after the default config below).
- Default config (JSON):
{ "adapter_name": "default", "adapter_cfg": { "*": { "rank": 64 } }, "adapter_type": "lora", "freeze_base_model": true, "freeze_lora_weights": false }
- field adapter_cfg: dict[str | Callable, PEFTAttributeConfig | dict]
Configuration for adapters. Maps module patterns to PEFTAttributeConfig or dict.
- field adapter_name: str
Name of the adapter to create or update.
- field adapter_type: str
Type of PEFT adapter to use. Currently only "lora" is supported.
- field freeze_base_model: bool
Whether to freeze the base model weights; in most cases, this should be set to True.
- field freeze_lora_weights: bool
Whether to freeze the LoRA adapter weights; in most cases, this should be set to False.
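Putting the fields together, a config that LoRA-adapts everything except the output layer could look like the following sketch. The import path is an assumption; check the module layout of your modelopt version.

# Hypothetical import path -- adjust for your modelopt version.
from modelopt.torch.peft.config import PEFTConfig

cfg = PEFTConfig(
    adapter_name="my_adapter",
    adapter_type="lora",  # currently the only supported adapter type
    adapter_cfg={
        "*": {"rank": 32, "scale": 1.0, "enable": True},
        "*output_layer*": {"enable": False},
    },
    freeze_base_model=True,     # train only the adapters
    freeze_lora_weights=False,  # keep LoRA weights trainable
)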