Configuration#
Warp has settings at the global, module, and kernel level that can be used to fine-tune the compilation and verbosity
of Warp programs. When a setting can be changed at multiple levels (e.g. enable_backward),
the setting at the more specific scope takes precedence.
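For instance, a kernel-level setting overrides the module-level default for that kernel alone. A minimal sketch (the kernel name `square` is hypothetical):

```python
import warp as wp

# module-level default: skip generating backward passes
wp.set_module_options({"enable_backward": False})

# the kernel-level setting is more specific, so it takes precedence:
# a backward pass is still generated for this kernel
@wp.kernel(enable_backward=True)
def square(x: wp.array(dtype=float), y: wp.array(dtype=float)):
    tid = wp.tid()
    y[tid] = x[tid] * x[tid]
```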
Global Settings#
Settings can be modified by direct assignment before or after calling wp.init(),
though some settings only take effect if set prior to initialization.
For example, the location of the user kernel cache can be changed with:
import os
import warp as wp
example_dir = os.path.dirname(os.path.realpath(__file__))
# set default cache directory before wp.init()
wp.config.kernel_cache_dir = os.path.join(example_dir, "tmp", "warpcache1")
wp.init()
See warp.config for a complete list of global settings.
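Other global settings follow the same assignment pattern; for example, a sketch that increases verbosity before initialization (assuming the verbose flag in warp.config):

```python
import warp as wp

# print detailed status messages during code generation and compilation
wp.config.verbose = True

wp.init()
```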
Module Settings#
Module-level settings controlling runtime compilation and code generation can be changed by passing a dictionary of
option pairs to wp.set_module_options().
For example, compilation of backward passes can be disabled for all kernels in a module with:
wp.set_module_options({"enable_backward": False})
The options for a module can also be queried using wp.get_module_options().
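A module-level setting can be queried back after being set; a sketch (the exact contents of the returned dictionary depend on the Warp version):

```python
import warp as wp

wp.set_module_options({"enable_backward": False})

options = wp.get_module_options()
print(options["enable_backward"])  # the value just set for this module
```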
| Field | Type | Default Value | Description |
|---|---|---|---|
| "mode" | String | Global setting | A module-level override of the global mode setting. |
| "max_unroll" | Integer | Global setting | A module-level override of the global max_unroll setting. |
| "enable_backward" | Boolean | Global setting | A module-level override of the global enable_backward setting. |
| "fast_math" | Boolean | False | If True, CUDA kernels are compiled with fast-math optimizations enabled, trading numerical accuracy for speed. |
| "fuse_fp" | Boolean | True | If True, the compiler may fuse floating-point multiply and add operations; disabling this can help achieve deterministic results at some cost in performance. |
| "lineinfo" | Boolean | False | If True, CUDA kernels are compiled with line-number information, which is useful for profiling with tools such as Nsight Compute. |
| "cuda_output" | String | None | A module-level override of the global cuda_output setting ("ptx" or "cubin"). |
| "block_dim" | Integer | 256 | The number of CUDA threads per block that kernels in the module will be compiled for. |
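Multiple module options can be set in a single call; a sketch assuming the fast_math, lineinfo, and block_dim module options:

```python
import warp as wp

# compile this module's CUDA kernels with fast math and embedded
# line info for profiling, using 128 threads per block
wp.set_module_options(
    {
        "fast_math": True,
        "lineinfo": True,
        "block_dim": 128,
    }
)
```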
Kernel Settings#
Kernel-level settings can be passed as arguments to the @wp.kernel decorator.
| Field | Type | Default Value | Description |
|---|---|---|---|
| enable_backward | Boolean | Module setting | If False, a backward (adjoint) pass is not generated for this kernel, regardless of the module-level setting. |
| module | Module \| String | None | Controls which module the kernel belongs to. If None, the kernel is registered in the module of its enclosing scope; if "unique", the kernel is registered in a new unique module created just for this kernel and its dependent functions and structs. |
| launch_bounds | int \| tuple | None | The CUDA __launch_bounds__ to compile the kernel with. |
@wp.kernel(enable_backward=False)
def scale_2(
x: wp.array(dtype=float),
y: wp.array(dtype=float),
):
y[0] = x[0] ** 2.0
@wp.kernel(module="unique")
def isolated_kernel(a: wp.array(dtype=float), b: wp.array(dtype=float)):
# This kernel will be registered in a new unique module created
# just for this kernel and its dependent functions and structs
tid = wp.tid()
b[tid] = a[tid] + 1.0
@wp.kernel(launch_bounds=(256, 1))
def bounded_kernel(a: wp.array(dtype=float)):
# CUDA __launch_bounds__ will be set to (256, 1)
tid = wp.tid()
a[tid] = a[tid] * 2.0