autonas

Entrypoints for AutoNAS mode.

Classes

AutoNASModeDescriptor

Class to describe the "autonas" mode.

AutoNASPatchManager

A class to handle the monkey patching of the model for AutoNAS mode.

EvolveSearcher

An iterative searcher that uses an evolutionary algorithm to optimize the subnet config.

ExportModeDescriptor

Class to describe the "export" mode.

IterativeSearcher

Base class for iterative search algorithms.

RandomSearcher

An iterative searcher that samples subnets randomly.

Functions

convert_autonas_searchspace

Convert search space for AutoNAS mode with correct patch manager.

convert_searchspace

Convert given model into a search space.

export_searchspace

Export a subnet configuration of the search space to a regular model.

restore_autonas_searchspace

Restore search space for AutoNAS mode with correct patch manager.

restore_export

Restore & export the subnet configuration of the search space to a regular model.

restore_searchspace

Restore a search space from the given model.

update_autonas_metadata

Update the stored subnet config to the model's current subnet config.

ModeloptConfig AutoNASConfig

Bases: ModeloptBaseRuleConfig

Configuration for the "autonas" mode.

Default config (JSON):

{
   "nn.Conv1d": {
      "*": {
         "channel_divisor": 32,
         "channels_ratio": [
            0.5,
            0.67,
            1.0
         ],
         "kernel_size": []
      }
   },
   "nn.Conv2d": {
      "*": {
         "channel_divisor": 32,
         "channels_ratio": [
            0.5,
            0.67,
            1.0
         ],
         "kernel_size": []
      }
   },
   "nn.Conv3d": {
      "*": {
         "channel_divisor": 32,
         "channels_ratio": [
            0.5,
            0.67,
            1.0
         ],
         "kernel_size": []
      }
   },
   "nn.ConvTranspose1d": {
      "*": {
         "channel_divisor": 32,
         "channels_ratio": [
            0.5,
            0.67,
            1.0
         ],
         "kernel_size": []
      }
   },
   "nn.ConvTranspose2d": {
      "*": {
         "channel_divisor": 32,
         "channels_ratio": [
            0.5,
            0.67,
            1.0
         ],
         "kernel_size": []
      }
   },
   "nn.ConvTranspose3d": {
      "*": {
         "channel_divisor": 32,
         "channels_ratio": [
            0.5,
            0.67,
            1.0
         ],
         "kernel_size": []
      }
   },
   "nn.Linear": {
      "*": {
         "feature_divisor": 32,
         "features_ratio": [
            0.5,
            0.67,
            1.0
         ]
      }
   },
   "nn.BatchNorm1d": {
      "*": {
         "feature_divisor": 32,
         "features_ratio": [
            0.5,
            0.67,
            1.0
         ]
      }
   },
   "nn.BatchNorm2d": {
      "*": {
         "feature_divisor": 32,
         "features_ratio": [
            0.5,
            0.67,
            1.0
         ]
      }
   },
   "nn.BatchNorm3d": {
      "*": {
         "feature_divisor": 32,
         "features_ratio": [
            0.5,
            0.67,
            1.0
         ]
      }
   },
   "nn.SyncBatchNorm": {
      "*": {
         "feature_divisor": 32,
         "features_ratio": [
            0.5,
            0.67,
            1.0
         ]
      }
   },
   "nn.InstanceNorm1d": {
      "*": {
         "feature_divisor": 32,
         "features_ratio": [
            0.5,
            0.67,
            1.0
         ]
      }
   },
   "nn.InstanceNorm2d": {
      "*": {
         "feature_divisor": 32,
         "features_ratio": [
            0.5,
            0.67,
            1.0
         ]
      }
   },
   "nn.InstanceNorm3d": {
      "*": {
         "feature_divisor": 32,
         "features_ratio": [
            0.5,
            0.67,
            1.0
         ]
      }
   },
   "nn.LayerNorm": {
      "*": {
         "feature_divisor": 32,
         "features_ratio": [
            0.5,
            0.67,
            1.0
         ]
      }
   },
   "nn.GroupNorm": {
      "*": {
         "channel_divisor": 32,
         "channels_ratio": [
            0.5,
            0.67,
            1.0
         ]
      }
   },
   "nn.Sequential": {
      "*": {
         "min_depth": 0
      }
   }
}
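
For orientation, a minimal sketch of passing such a rule config when converting a model follows. It assumes modelopt.torch.nas is importable as mtn and that mtn.convert accepts mode=[("autonas", rules)]; the exact signature may differ between versions:

# Minimal sketch (not the canonical API): convert a model into an AutoNAS
# search space with a custom rule config.
import torch.nn as nn

import modelopt.torch.nas as mtn

model = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1),
    nn.BatchNorm2d(64),
    nn.ReLU(),
    nn.Conv2d(64, 128, kernel_size=3, padding=1),
)

# Rules follow the AutoNASConfig schema above; omitted keys fall back to the
# defaults shown in the JSON.
rules = {
    "nn.Conv2d": {"*": {"channels_ratio": [0.5, 1.0], "channel_divisor": 32, "kernel_size": []}},
    "nn.BatchNorm2d": {"*": {"features_ratio": [0.5, 1.0], "feature_divisor": 32}},
}

model = mtn.convert(model, mode=[("autonas", rules)])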

field nn.BatchNorm1d: DynamicBatchNorm1dConfig | None | dict[str, DynamicBatchNorm1dConfig | None]

Configuration for dynamic nn.BatchNorm1d module.

If the "nn.BatchNorm1d" key is not specified, the default configuration (shown in JSON) will be used:

{
  "*": {
    "features_ratio": [
      0.5,
      0.67,
      1.0
    ],
    "feature_divisor": 32
  }
}

To deactivate any dynamic nn.BatchNorm1d module, use None instead of providing a dictionary {}.

To specify layer-specific configurations, you can specify a config for each submodule with the key specifying a glob pattern that matches the submodule name. For example, to convert to a dynamic module for all nn.BatchNorm1d layers except for those in the "lm_head" submodule use:

{
    "*": {...},
    "*lm_head*": None,
}

Note that glob expressions are processed sequentially in the order they are specified. Later keys in the config will overwrite earlier keys if they match the same submodule name.

If you want to specify the same configuration for all submodules, you can provide an unnested dictionary as well:

{...}

which is short for

{
    "*": {...},
}
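
To illustrate the glob-pattern rules described above, here is a hypothetical config written as a Python dict; the submodule pattern "*head*" is invented for this example:

# Hypothetical per-layer rule config; later glob keys override earlier ones.
config = {
    "nn.BatchNorm1d": {
        "*": {"features_ratio": [0.5, 0.67, 1.0], "feature_divisor": 32},
        "*head*": None,  # deactivate dynamic behavior for matching submodules
    },
    # Unnested shorthand: the same rule for every matching submodule.
    "nn.Linear": {"features_ratio": [0.5, 1.0], "feature_divisor": 32},
}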

field nn.BatchNorm2d: DynamicBatchNorm2dConfig | None | dict[str, DynamicBatchNorm2dConfig | None]

Configuration for dynamic nn.BatchNorm2d module.

If the "nn.BatchNorm2d" key is not specified, the default configuration (shown in JSON) will be used:

{
  "*": {
    "features_ratio": [
      0.5,
      0.67,
      1.0
    ],
    "feature_divisor": 32
  }
}

To deactivate any dynamic nn.BatchNorm2d module, use None instead of providing a dictionary {}.

To specify layer-specific configurations, you can specify a config for each submodule with the key specifying a glob pattern that matches the submodule name. For example, to convert to a dynamic module for all nn.BatchNorm2d layers except for those in the "lm_head" submodule use:

{
    "*": {...},
    "*lm_head*": None,
}

Note that glob expressions are processed sequentially in the order they are specified. Later keys in the config will overwrite earlier keys if they match the same submodule name.

If you want to specify the same configuration for all submodules, you can provide an unnested dictionary as well:

{...}

which is short for

{
    "*": {...},
}

field nn.BatchNorm3d: DynamicBatchNorm3dConfig | None | dict[str, DynamicBatchNorm3dConfig | None]

Configuration for dynamic nn.BatchNorm3d module.

If the "nn.BatchNorm3d" key is not specified, the default configuration (shown in JSON) will be used:

{
  "*": {
    "features_ratio": [
      0.5,
      0.67,
      1.0
    ],
    "feature_divisor": 32
  }
}

To deactivate any dynamic nn.BatchNorm3d module, use None instead of providing a dictionary {}.

To specify layer-specific configurations, you can specify a config for each submodule with the key specifying a glob pattern that matches the submodule name. For example, to convert to a dynamic module for all nn.BatchNorm3d layers except for those in the "lm_head" submodule use:

{
    "*": {...},
    "*lm_head*": None,
}

Note that glob expressions are processed sequentially in the order they are specified. Later keys in the config will overwrite earlier keys if they match the same submodule name.

If you want to specify the same configuration for all submodules, you can provide an unnested dictionary as well:

{...}

which is short for

{
    "*": {...},
}

field nn.Conv1d: DynamicConv1dConfig | None | dict[str, DynamicConv1dConfig | None]

Configuration for dynamic nn.Conv1d module.

If the "nn.Conv1d" key is not specified, the default configuration (shown in JSON) will be used:

{
  "*": {
    "channels_ratio": [
      0.5,
      0.67,
      1.0
    ],
    "kernel_size": [],
    "channel_divisor": 32
  }
}

To deactivate any dynamic nn.Conv1d module, use None instead of providing a dictionary {}.

To specify layer-specific configurations, you can specify a config for each submodule with the key specifying a glob pattern that matches the submodule name. For example, to convert to a dynamic module for all nn.Conv1d layers except for those in the "lm_head" submodule use:

{
    "*": {...},
    "*lm_head*": None,
}

Note that glob expressions are processed sequentially in the order they are specified. Later keys in the config will overwrite earlier keys if they match the same submodule name.

If you want to specify the same configuration for all submodules, you can provide an unnested dictionary as well:

{...}

which is short for

{
    "*": {...},
}

field nn.Conv2d: DynamicConv2dConfig | None | dict[str, DynamicConv2dConfig | None]

Configuration for dynamic nn.Conv2d module.

If the "nn.Conv2d" key is not specified, the default configuration (shown in JSON) will be used:

{
  "*": {
    "channels_ratio": [
      0.5,
      0.67,
      1.0
    ],
    "kernel_size": [],
    "channel_divisor": 32
  }
}

To deactivate any dynamic nn.Conv2d module, use None instead of providing a dictionary {}.

To specify layer-specific configurations, you can specify a config for each submodule with the key specifying a glob pattern that matches the submodule name. For example, to convert to a dynamic module for all nn.Conv2d layers except for those in the "lm_head" submodule use:

{
    "*": {...},
    "*lm_head*": None,
}

Note that glob expressions are processed sequentially in the order they are specified. Later keys in the config will overwrite earlier keys if they match the same submodule name.

If you want to specify the same configuration for all submodules, you can provide an unnested dictionary as well:

{...}

which is short for

{
    "*": {...},
}

field nn.Conv3d: DynamicConv3dConfig | None | dict[str, DynamicConv3dConfig | None]

Configuration for dynamic nn.Conv3d module.

If the "nn.Conv3d" key is not specified, the default configuration (shown in JSON) will be used:

{
  "*": {
    "channels_ratio": [
      0.5,
      0.67,
      1.0
    ],
    "kernel_size": [],
    "channel_divisor": 32
  }
}

To deactivate any dynamic nn.Conv3d module, use None instead of providing a dictionary {}.

To specify layer-specific configurations, you can specify a config for each submodule with the key specifying a glob pattern that matches the submodule name. For example, to convert to a dynamic module for all nn.Conv3d layers except for those in the "lm_head" submodule use:

{
    "*": {...},
    "*lm_head*": None,
}

Note that glob expressions are processed sequentially in the order they are specified. Later keys in the config will overwrite earlier keys if they match the same submodule name.

If you want to specify the same configuration for all submodules, you can provide an unnested dictionary as well:

{...}

which is short for

{
    "*": {...},
}

field nn.ConvTranspose1d: DynamicConvTranspose1dConfig | None | dict[str, DynamicConvTranspose1dConfig | None]

Configuration for dynamic nn.ConvTranspose1d module.

If the "nn.ConvTranspose1d" key is not specified, the default configuration (shown in JSON) will be used:

{
  "*": {
    "channels_ratio": [
      0.5,
      0.67,
      1.0
    ],
    "kernel_size": [],
    "channel_divisor": 32
  }
}

To deactivate any dynamic nn.ConvTranspose1d module, use None instead of providing a dictionary {}.

To specify layer-specific configurations, you can specify a config for each submodule with the key specifying a glob pattern that matches the submodule name. For example, to convert to a dynamic module for all nn.ConvTranspose1d layers except for those in the "lm_head" submodule use:

{
    "*": {...},
    "*lm_head*": None,
}

Note that glob expressions are processed sequentially in the order they are specified. Later keys in the config will overwrite earlier keys if they match the same submodule name.

If you want to specify the same configuration for all submodules, you can provide an unnested dictionary as well:

{...}

which is short for

{
    "*": {...},
}

field nn.ConvTranspose2d: DynamicConvTranspose2dConfig | None | dict[str, DynamicConvTranspose2dConfig | None]

Configuration for dynamic nn.ConvTranspose2d module.

If the "nn.ConvTranspose2d" key is not specified, the default configuration (shown in JSON) will be used:

{
  "*": {
    "channels_ratio": [
      0.5,
      0.67,
      1.0
    ],
    "kernel_size": [],
    "channel_divisor": 32
  }
}

To deactivate any dynamic nn.ConvTranspose2d module, use None instead of providing a dictionary {}.

To specify layer-specific configurations, you can specify a config for each submodule with the key specifying a glob pattern that matches the submodule name. For example, to convert to a dynamic module for all nn.ConvTranspose2d layers except for those in the "lm_head" submodule use:

{
    "*": {...},
    "*lm_head*": None,
}

Note that glob expressions are processed sequentially in the order they are specified. Later keys in the config will overwrite earlier keys if they match the same submodule name.

If you want to specify the same configuration for all submodules, you can provide an unnested dictionary as well:

{...}

which is short for

{
    "*": {...},
}

field nn.ConvTranspose3d: DynamicConvTranspose3dConfig | None | dict[str, DynamicConvTranspose3dConfig | None]

Configuration for dynamic nn.ConvTranspose3d module.

If the "nn.ConvTranspose3d" key is not specified, the default configuration (shown in JSON) will be used:

{
  "*": {
    "channels_ratio": [
      0.5,
      0.67,
      1.0
    ],
    "kernel_size": [],
    "channel_divisor": 32
  }
}

To deactivate any dynamic nn.ConvTranspose3d module, use None instead of providing a dictionary {}.

To specify layer-specific configurations, you can specify a config for each submodule with the key specifying a glob pattern that matches the submodule name. For example, to convert to a dynamic module for all nn.ConvTranspose3d layers except for those in the "lm_head" submodule use:

{
    "*": {...},
    "*lm_head*": None,
}

Note that glob expressions are processed sequentially in the order they are specified. Later keys in the config will overwrite earlier keys if they match the same submodule name.

If you want to specify the same configuration for all submodules, you can provide an unnested dictionary as well:

{...}

which is short for

{
    "*": {...},
}

field nn.GroupNorm: DynamicGroupNormConfig | None | dict[str, DynamicGroupNormConfig | None]

Configuration for dynamic nn.GroupNorm module.

If the "nn.GroupNorm" key is not specified, the default configuration (shown in JSON) will be used:

{
  "*": {
    "channels_ratio": [
      0.5,
      0.67,
      1.0
    ],
    "channel_divisor": 32
  }
}

To deactivate any dynamic nn.GroupNorm module, use None instead of providing a dictionary {}.

To specify layer-specific configurations, you can specify a config for each submodule with the key specifying a glob pattern that matches the submodule name. For example, to convert to a dynamic module for all nn.GroupNorm layers except for those in the "lm_head" submodule use:

{
    "*": {...},
    "*lm_head*": None,
}

Note that glob expressions are processed sequentially in the order they are specified. Later keys in the config will overwrite earlier keys if they match the same submodule name.

If you want to specify the same configuration for all submodules, you can provide an unnested dictionary as well:

{...}

which is short for

{
    "*": {...},
}

field nn.InstanceNorm1d: DynamicInstanceNorm1dConfig | None | dict[str, DynamicInstanceNorm1dConfig | None]

Configuration for dynamic nn.InstanceNorm1d module.

If the "nn.InstanceNorm1d" key is not specified, the default configuration (shown in JSON) will be used:

{
  "*": {
    "features_ratio": [
      0.5,
      0.67,
      1.0
    ],
    "feature_divisor": 32
  }
}

To deactivate any dynamic nn.InstanceNorm1d module, use None instead of providing a dictionary {}.

To specify layer-specific configurations, you can specify a config for each submodule with the key specifying a glob pattern that matches the submodule name. For example, to convert to a dynamic module for all nn.InstanceNorm1d layers except for those in the "lm_head" submodule use:

{
    "*": {...},
    "*lm_head*": None,
}

Note that glob expressions are processed sequentially in the order they are specified. Later keys in the config will overwrite earlier keys if they match the same submodule name.

If you want to specify the same configuration for all submodules, you can provide an unnested dictionary as well:

{...}

which is short for

{
    "*": {...},
}

field nn.InstanceNorm2d: DynamicInstanceNorm2dConfig | None | dict[str, DynamicInstanceNorm2dConfig | None]

Configuration for dynamic nn.InstanceNorm2d module.

If the "nn.InstanceNorm2d" key is not specified, the default configuration (shown in JSON) will be used:

{
  "*": {
    "features_ratio": [
      0.5,
      0.67,
      1.0
    ],
    "feature_divisor": 32
  }
}

To deactivate any dynamic nn.InstanceNorm2d module, use None instead of providing a dictionary {}.

To specify layer-specific configurations, you can specify a config for each submodule with the key specifying a glob pattern that matches the submodule name. For example, to convert to a dynamic module for all nn.InstanceNorm2d layers except for those in the "lm_head" submodule use:

{
    "*": {...},
    "*lm_head*": None,
}

Note that glob expressions are processed sequentially in the order they are specified. Later keys in the config will overwrite earlier keys if they match the same submodule name.

If you want to specify the same configuration for all submodules, you can provide an unnested dictionary as well:

{...}

which is short for

{
    "*": {...},
}

field nn.InstanceNorm3d: DynamicInstanceNorm3dConfig | None | dict[str, DynamicInstanceNorm3dConfig | None]

Configuration for dynamic nn.InstanceNorm3d module.

If the "nn.InstanceNorm3d" key is not specified, the default configuration (shown in JSON) will be used:

{
  "*": {
    "features_ratio": [
      0.5,
      0.67,
      1.0
    ],
    "feature_divisor": 32
  }
}

To deactivate any dynamic nn.InstanceNorm3d module, use None instead of providing a dictionary {}.

To specify layer-specific configurations, you can specify a config for each submodule with the key specifying a glob pattern that matches the submodule name. For example, to convert to a dynamic module for all nn.InstanceNorm3d layers except for those in the "lm_head" submodule use:

{
    "*": {...},
    "*lm_head*": None,
}

Note that glob expressions are processed sequentially in the order they are specified. Later keys in the config will overwrite earlier keys if they match the same submodule name.

If you want to specify the same configuration for all submodules, you can provide an unnested dictionary as well:

{...}

which is short for

{
    "*": {...},
}

field nn.LayerNorm: DynamicLayerNormConfig | None | dict[str, DynamicLayerNormConfig | None]

Configuration for dynamic nn.LayerNorm module.

If the "nn.LayerNorm" key is not specified, the default configuration (shown in JSON) will be used:

{
  "*": {
    "features_ratio": [
      0.5,
      0.67,
      1.0
    ],
    "feature_divisor": 32
  }
}

To deactivate any dynamic nn.LayerNorm module, use None instead of providing a dictionary {}.

To specify layer-specific configurations, you can specify a config for each submodule with the key specifying a glob pattern that matches the submodule name. For example, to convert to a dynamic module for all nn.LayerNorm layers except for those in the "lm_head" submodule use:

{
    "*": {...},
    "*lm_head*": None,
}

Note that glob expressions are processed sequentially in the order they are specified. Later keys in the config will overwrite earlier keys if they match the same submodule name.

If you want to specify the same configuration for all submodules, you can provide an unnested dictionary as well:

{...}

which is short for

{
    "*": {...},
}

field nn.Linear: DynamicLinearConfig | None | dict[str, DynamicLinearConfig | None]

Configuration for dynamic nn.Linear module.

If the "nn.Linear" key is not specified, the default configuration (shown in JSON) will be used:

{
  "*": {
    "features_ratio": [
      0.5,
      0.67,
      1.0
    ],
    "feature_divisor": 32
  }
}

To deactivate any dynamic nn.Linear module, use None instead of providing a dictionary {}.

To specify layer-specific configurations, you can specify a config for each submodule with the key specifying a glob pattern that matches the submodule name. For example, to convert to a dynamic module for all nn.Linear layers except for those in the "lm_head" submodule use:

{
    "*": {...},
    "*lm_head*": None,
}

Note that glob expressions are processed sequentially in the order they are specified. Later keys in the config will overwrite earlier keys if they match the same submodule name.

If you want to specify the same configuration for all submodules, you can provide an unnested dictionary as well:

{...}

which is short for

{
    "*": {...},
}

field nn.Sequential: DynamicSequentialConfig | None | dict[str, DynamicSequentialConfig | None]

Configuration for dynamic nn.Sequential module.

If the "nn.Sequential" key is not specified, the default configuration (shown in JSON) will be used:

{
  "*": {
    "min_depth": 0
  }
}

To deactivate any dynamic nn.Sequential module, use None instead of providing a dictionary {}.

To specify layer-specific configurations, you can specify a config for each submodule with the key specifying a glob pattern that matches the submodule name. For example, to convert to a dynamic module for all nn.Sequential layers except for those in the "lm_head" submodule use:

{
    "*": {...},
    "*lm_head*": None,
}

Note that glob expressions are processed sequentially in the order they are specified. Later keys in the config will overwrite earlier keys if they match the same submodule name.

If you want to specify the same configuration for all submodules, you can provide an unnested dictionary as well:

{...}

which is short for

{
    "*": {...},
}

field nn.SyncBatchNorm: DynamicSyncBatchNormConfig | None | dict[str, DynamicSyncBatchNormConfig | None]

Configuration for dynamic nn.SyncBatchNorm module.

If the "nn.SyncBatchNorm" key is not specified, the default configuration (shown in JSON) will be used:

{
  "*": {
    "features_ratio": [
      0.5,
      0.67,
      1.0
    ],
    "feature_divisor": 32
  }
}

To deactivate any dynamic nn.SyncBatchNorm module, use None instead of providing a dictionary {}.

To specify layer-specific configurations, you can specify a config for each submodule with the key specifying a glob pattern that matches the submodule name. For example, to convert to a dynamic module for all nn.SyncBatchNorm layers except for those in the "lm_head" submodule use:

{
    "*": {...},
    "*lm_head*": None,
}

Note that glob expressions are processed sequentially in the order they are specified. Later keys in the config will overwrite earlier keys if they match the same submodule name.

If you want to specify the same configuration for all submodules, you can provide an unnested dictionary as well:

{...}

which is short for

{
    "*": {...},
}

class AutoNASModeDescriptor

Bases: ModeDescriptor

Class to describe the "autonas" mode.

The properties of this mode can be inspected via the source code.

property config_class: type[ModeloptBaseConfig]

Specifies the config class for the mode.

property convert: Callable[[Module, ModeloptBaseConfig], tuple[Module, dict[str, Any]]] | Callable[[Module, ModeloptBaseConfig, Any], tuple[Module, dict[str, Any]]]

The mode’s entrypoint for converting a model.

property export_mode: str | None

The mode that corresponds to the export mode of this mode.

property name: str

Returns the value (str representation) of the mode.

property next_modes: set[str] | None

Modes that must immediately follow this mode.

property restore: Callable[[Module, ModeloptBaseConfig, dict[str, Any]], Module]

The mode’s entrypoint for restoring a model.

property search_algorithm: type[BaseSearcher]

Specifies the search algorithm to use for this mode (if any).

property update_for_new_mode: Callable[[Module, ModeloptBaseConfig, dict[str, Any]], None]

The mode’s entrypoint for updating the model’s state before the next mode.

property update_for_save: Callable[[Module, ModeloptBaseConfig, dict[str, Any]], None]

The mode’s entrypoint for updating the model’s state before saving.

class AutoNASPatchManager

Bases: PatchManager

A class to handle the monkey patching of the model for AutoNAS mode.

property sample_during_training: bool

Indicates whether we should sample a new subnet during training.

class EvolveSearcher

Bases: IterativeSearcher

An iterative searcher that uses an evolutionary algorithm to optimize the subnet config.

after_step()

Update population after each iterative step.

Return type:

None

before_search()

Set the lower bound of the constraints to 0.85 * upper bound before search.

Return type:

None

before_step()

Update candidates and population before each iterative step.

Return type:

None

candidates: list[dict[str, Any]]

property default_search_config: dict[str, Any]

Default search config contains additional algorithm parameters.

property default_state_dict: dict[str, Any]

Return default state dict.

population: list[dict[str, Any]]

sample()

Sampling a new subnet involves random sampling, mutation, and crossover.

Return type:

dict[str, Any]
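
As a hedged sketch of how these searcher entrypoints are typically driven, the following assumes an mtn.search helper and the constraint/config keys shown; both are assumptions about the surrounding API and may differ between versions:

# Hedged sketch of driving the search over a converted AutoNAS model.
import torch

import modelopt.torch.nas as mtn

def score_func(model):
    # User-supplied metric (higher is better); `validate` is hypothetical.
    return validate(model)

searched_model = mtn.search(
    model,  # a model previously converted via the "autonas" mode
    # Upper bound; EvolveSearcher sets the lower bound to 0.85 * upper bound
    # (see before_search above).
    constraints={"flops": "60%"},
    dummy_input=torch.randn(1, 3, 224, 224),
    config={"score_func": score_func, "checkpoint": "autonas_search.pth"},
)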

ModeloptConfig ExportConfig

Bases: ModeloptBaseConfig

Configuration for the export mode.

This mode is used to export a model after NAS search.

Default config (JSON):

{
   "strict": true,
   "calib": false
}

field calib: bool

Whether to calibrate the subnet before exporting.

field strict: bool

Enforces that the subnet configuration must exactly match during export.
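
A hedged usage sketch for this mode, assuming an mtn.export entry point that applies the config above with its defaults (strict=True, calib=False):

# Hedged sketch: export the currently selected subnet as a regular model.
import modelopt.torch.nas as mtn

exported_model = mtn.export(searched_model)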

class ExportModeDescriptor

Bases: ModeDescriptor

Class to describe the "export" mode.

The properties of this mode can be inspected via the source code.

property config_class: type[ModeloptBaseConfig]

Specifies the config class for the mode.

property convert: Callable[[Module, ModeloptBaseConfig], tuple[Module, dict[str, Any]]] | Callable[[Module, ModeloptBaseConfig, Any], tuple[Module, dict[str, Any]]]

The mode’s entrypoint for converting a model.

property is_export_mode: bool

Whether the mode is an export mode.

Returns:

True if the mode is an export mode, False otherwise. Defaults to False.

property name: str

Returns the value (str representation) of the mode.

property restore: Callable[[Module, ModeloptBaseConfig, dict[str, Any]], Module]

The mode’s entrypoint for restoring a model.

class IterativeSearcher

Bases: BaseSearcher, ABC

Base class for iterative search algorithms.

after_search()

Select the best model.

Return type:

None

after_step()

Run after each iterative step.

Return type:

None

before_search()

Ensure that the model is actually configurable and ready for eval.

Return type:

None

before_step()

Run before each iterative step.

Return type:

None

best: dict[str, Any]

best_history: dict[str, Any]

candidate: dict[str, Any]

constraints_func: ConstraintsFunc

property default_search_config: dict[str, Any]

Get the default config for the searcher.

property default_state_dict: dict[str, Any]

Return default state dict.

early_stop()

Check whether the search should stop early.

Return type:

bool

history: dict[str, Any]

iter_num: int

num_satisfied: int

run_search()

Run the iterative search loop.

Return type:

None

run_step()

The main routine of each iterative step.

Return type:

None

abstract sample()

Sample and select a new subnet configuration and return it.

Return type:

dict[str, Any]

samples: dict[str, Any]

sanitize_search_config(config)

Sanitize the search config dict.

Parameters:

config (dict[str, Any] | None)

Return type:

dict[str, Any]
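
Since sample() is the only abstract method, a custom searcher can be sketched as a small subclass; the import path, the propose_random_subnet helper, and the exact return contract of sample() below are assumptions to be checked against the source:

# Schematic sketch of a custom iterative searcher; only sample() is abstract.
import random
from typing import Any

from modelopt.torch.nas.autonas import IterativeSearcher  # assumed module path

class MyRandomSearcher(IterativeSearcher):
    def sample(self) -> dict[str, Any]:
        rng = random.Random(self.iter_num)  # iter_num is tracked by the base class
        # Hypothetical helper: draws a subnet config from the search space
        # and returns it as a dict.
        return propose_random_subnet(self.model, rng)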

class RandomSearcher

Bases: IterativeSearcher

An iterative searcher that samples subnets randomly.

sample()

Randomly sample a new subnet during each step.

Return type:

dict[str, Any]

convert_autonas_searchspace(model, config)

Convert search space for AutoNAS mode with correct patch manager.

Parameters:
  • model (Module)

  • config (AutoNASConfig)

Return type:

tuple[Module, dict[str, Any]]

convert_searchspace(model, config, patch_manager_type)

Convert given model into a search space.

Parameters:
  • model (Module)

  • config (ModeloptBaseRuleConfig)

  • patch_manager_type (type[PatchManager])

Return type:

tuple[Module, dict[str, Any]]

export_searchspace(model, config)

Export a subnet configuration of the search space to a regular model.

Parameters:
  • model (Module)

  • config (ExportConfig)

Return type:

tuple[Module, dict[str, Any]]

restore_autonas_searchspace(model, config, metadata)

Restore search space for AutoNAS mode with correct patch manager.

Parameters:
  • model (Module)

  • config (AutoNASConfig)

  • metadata (dict[str, Any])

Return type:

Module

restore_export(model, config, metadata)

Restore & export the subnet configuration of the search space to a regular model.

Parameters:
  • model (Module)

  • config (ExportConfig)

  • metadata (dict[str, Any])

Return type:

Module

restore_searchspace(model, config, metadata, patch_manager)

Restore a search space from the given model.

Parameters:
  • model (Module)

  • config (ModeloptBaseRuleConfig)

  • metadata (dict[str, Any])

  • patch_manager (type[PatchManager])

Return type:

Module

update_autonas_metadata(model, config, metadata)

Update the stored subnet config to the model's current subnet config.

Parameters:
  • model (Module)

  • config (AutoNASConfig)

  • metadata (dict[str, Any])

Return type:

None
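
These entrypoints are normally exercised indirectly through checkpointing; a hedged sketch of that flow, assuming the modelopt.torch.opt save/restore helpers:

# Hedged sketch: the restore_* entrypoints above run under the hood when a
# ModelOpt checkpoint is reloaded.
import modelopt.torch.opt as mto

mto.save(model, "searchspace.pth")  # update_for_save / update_autonas_metadata run here

fresh_model = build_model()  # hypothetical: rebuilds the original, unconverted architecture
restored = mto.restore(fresh_model, "searchspace.pth")  # restore_autonas_searchspace runs here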