conversion

Main APIs and entry points for model pruning.

Functions

  • export – Export a pruned subnet to a regular model.

  • convert – Convert a regular PyTorch model into a model that supports design space optimization.

convert(model, mode)

Convert a regular PyTorch model into a model that supports design space optimization.

Parameters:
  • model (Module | Type[Module] | Tuple | Callable) – A model-like object. Can be an nn.Module instance, a model class type, or a tuple of the form (model_cls,), (model_cls, args), or (model_cls, args, kwargs). The model will be initialized as model_cls(*args, **kwargs).

  • mode (_ModeDescriptor | str | List[_ModeDescriptor | str] | List[Tuple[str, Dict[str, Any]]]) –

    A string, a Mode, a list of strings/Modes, or a list of (mode, config) tuples indicating the desired mode(s) and their configurations for the convert process. Modes set up the model for different model optimization algorithms. The following modes are available:

    • "autonas": The model will be converted into a search space and set up to automatically perform operations required for AutoNAS-based model training, evaluation, and search. The mode’s config is described in AutoNASConfig.

    If a mode is specified together with a config (i.e., as a (mode, config) tuple), the config provides the per-mode configuration. If no config is provided, the default configuration is used.
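To make the accepted model-like tuple form concrete, here is a minimal, self-contained sketch of how a (model_cls, args, kwargs) tuple is expanded into a model instance. TinyModel is a hypothetical stand-in for an nn.Module subclass; convert() itself is not called here.

```python
# Hypothetical stand-in for an nn.Module subclass (illustration only).
class TinyModel:
    def __init__(self, in_features, out_features, bias=True):
        self.in_features = in_features
        self.out_features = out_features
        self.bias = bias

# Tuple forms accepted: (model_cls,), (model_cls, args), (model_cls, args, kwargs).
spec = (TinyModel, (16, 8), {"bias": False})

# convert() initializes the model as model_cls(*args, **kwargs):
cls, args, kwargs = spec
model = cls(*args, **kwargs)
print(model.out_features, model.bias)  # 8 False
```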

Returns:

A converted model with the original weights preserved that can be used for model optimization.

Return type:

Module

Note

Model wrappers (such as DataParallel/DistributedDataParallel) are not supported during the convert process. Wrap the model after conversion instead.

Note

convert() relies on monkey patching to augment the forward(), eval(), and train() methods of model as well as augment individual modules to make them dynamic. This renders the conversion incompatible with other monkey patches to those methods and modules! Note that convert() is still fully compatible with inheritance.
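The conflict described above can be illustrated with a minimal, library-independent sketch: each monkey patch replaces the bound method on the instance, so a later patch that does not chain to the previous one silently discards it. Net and the patches below are illustrative only, not the library's code.

```python
# Illustration of why stacked monkey patches conflict (not the library's code).
class Net:
    def forward(self, x):
        return x

net = Net()
orig = net.forward
net.forward = lambda x: orig(x) + 1  # patch A: convert()-style augmentation
net.forward = lambda x: x * 2        # patch B replaces A without chaining to it
print(net.forward(3))  # 6 -- patch A's "+1" was silently discarded
```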

Note

  1. Configs can be customized for individual layers using glob expressions on qualified submodule names, e.g., as shown for nn.Conv2d in the above example.

  2. Keys in the config that appear earlier in a dict have lower priority, e.g., backbone.stages.1.0.spatial_conv will have out_channels_ratio [0.334, 0.5, 0.667, 1.0], not [1.0].

  3. Config entries without layer qualifiers are also supported, e.g., as shown for nn.Sequential in the above example.

  4. Mixed usage of configurations with and without layer qualifiers is supported across different layer types, e.g., as shown for nn.Conv2d and nn.Sequential in the above example. For a given layer type, however, the configurations must either all use layer qualifiers or all omit them; mixing both for the same type is not supported.

  5. Use * as a wildcard matching any layer.
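The glob-matching and priority rules above can be sketched with a minimal resolver (an assumption for illustration, not the library's implementation): keys that appear later in the config dict override earlier matches, and * matches any layer name.

```python
import fnmatch

def resolve(config: dict, layer_name: str):
    """Return the config value for layer_name; later matching keys win."""
    value = None
    for pattern, v in config.items():  # dict preserves insertion order
        if fnmatch.fnmatch(layer_name, pattern):
            value = v  # a later (higher-priority) match overrides earlier ones
    return value

config = {
    "*": [1.0],  # lowest priority: wildcard fallback
    "backbone.stages.*.spatial_conv": [0.334, 0.5, 0.667, 1.0],
}
print(resolve(config, "backbone.stages.1.0.spatial_conv"))
# [0.334, 0.5, 0.667, 1.0] -- the later, more specific key wins over "*"
```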

export(model, strict=True, calib=False)

Export a pruned subnet to a regular model.

Parameters:
  • model (Module) – The pruned subnet to be exported.

  • strict (bool) – Raise an error when the config does not contain all necessary keys.

  • calib (bool) – Whether to calibrate the subnet to be exported.

Returns:

The current active subnet in regular PyTorch model format.

Return type:

Module

Note

If model is a wrapper such as DistributedDataParallel, it will be unwrapped, i.e., model.module will be returned.
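A minimal sketch of the unwrapping behavior described in this note; Wrapper and unwrap are hypothetical stand-ins for DataParallel-style wrappers and the library's internal unwrapping, shown only to illustrate that the inner module is what gets returned.

```python
# Hypothetical stand-in for DataParallel/DistributedDataParallel (illustration only).
class Wrapper:
    def __init__(self, module):
        self.module = module

def unwrap(model):
    """Return the wrapped module if present, else the model itself."""
    return model.module if hasattr(model, "module") else model

inner = object()
print(unwrap(Wrapper(inner)) is inner)  # True: wrapper is stripped
print(unwrap(inner) is inner)           # True: unwrapped models pass through
```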