warp.load_aot_module
warp.load_aot_module(module, device=None, arch=None, module_dir=None, use_ptx=None, strip_hash=False)
Load a module that was previously compiled ahead of time.
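A minimal usage sketch, assuming the module's binaries were already produced by Warp's ahead-of-time compilation step and stored in the default kernel cache (the module name my_kernels is hypothetical):

    import warp as wp

    import my_kernels  # hypothetical module containing @wp.kernel definitions

    wp.init()

    # Load the precompiled binaries for my_kernels on the current device.
    wp.load_aot_module(my_kernels)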
- Parameters:
module (Module | ModuleType | str) – The module to load.
device (Device | str | list[Device] | list[str] | None) – The device or devices to load the module on. If None, load the module for the current device.
arch (int | None) – The architecture to load the module for on all devices. If None, the architecture to load for will be inferred from the current device.
module_dir (str | PathLike | None) – The directory to load the module from. If not specified, the module will be loaded from the default cache directory.
use_ptx (bool | None) – Whether to load the module from PTX. This setting is only used when loading modules for the GPU. If None on a CUDA device, Warp will try both PTX and CUBIN (PTX first) and load the first that exists. If neither exists, a FileNotFoundError is raised listing all attempted paths.
strip_hash (bool) – Whether to strip the hash from the module and kernel names. Setting this value to True or False will update the module’s "strip_hash" option. If left at None, the current value will be used.
Warning: Do not enable strip_hash for modules that contain generic kernels. Generic kernels compile to multiple overloads, and the per-overload hash is required to distinguish them. Stripping the hash in this case will cause the module to fail to compile.
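As an illustration of these parameters, the following sketch loads precompiled binaries from a custom directory onto two CUDA devices, forcing PTX; the module name, device strings, and directory are assumptions rather than values required by the API:

    import warp as wp

    import my_kernels  # hypothetical module whose AOT binaries were built earlier

    wp.init()

    # Load PTX binaries from a custom directory onto two CUDA devices.
    # "./aot_cache" and the device list are illustrative values.
    wp.load_aot_module(
        my_kernels,
        device=["cuda:0", "cuda:1"],
        module_dir="./aot_cache",
        use_ptx=True,
    )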
- Raises:
FileNotFoundError – If no matching binary is found. When use_ptx is None on a CUDA device, both PTX and CUBIN candidates are tried before raising.
TypeError – If the module argument is not a Module, a types.ModuleType, or a string.
- Return type:
None
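One way to handle the documented failure mode, sketched under the assumption that the precompiled binaries may be missing for the current device (my_kernels and ./aot_cache are hypothetical):

    import warp as wp

    import my_kernels  # hypothetical module

    wp.init()

    try:
        # With use_ptx=None (the default) on a CUDA device, both PTX and CUBIN
        # candidates are tried before FileNotFoundError is raised.
        wp.load_aot_module(my_kernels, module_dir="./aot_cache")
    except FileNotFoundError as err:
        # No matching binary was found; the message lists all attempted paths.
        print(f"Precompiled module not found: {err}")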