warp.kernel#
- warp.kernel(f=None, *, enable_backward=None, module=None, launch_bounds=None)
Decorator to register a Warp kernel from a Python function. The function must have type annotations for all arguments and must not return anything.
Example:

```python
@wp.kernel
def my_kernel(a: wp.array(dtype=float), b: wp.array(dtype=float)):
    tid = wp.tid()
    b[tid] = a[tid] + 1.0

@wp.kernel(enable_backward=False)
def my_kernel_no_backward(a: wp.array(dtype=float, ndim=2), x: float):
    # the backward pass will not be generated
    i, j = wp.tid()
    a[i, j] = x

@wp.kernel(module="unique")
def my_kernel_unique_module(a: wp.array(dtype=float), b: wp.array(dtype=float)):
    # the kernel will be registered in a new unique module created just for this
    # kernel and its dependent functions and structs
    tid = wp.tid()
    b[tid] = a[tid] + 1.0

@wp.kernel(launch_bounds=(256, 1))
def my_kernel_with_launch_bounds(a: wp.array(dtype=float)):
    # CUDA __launch_bounds__ will be set to (256, 1)
    tid = wp.tid()
    a[tid] = a[tid] * 2.0
```
- Parameters:
f (Callable | None) – The function to be registered as a kernel.
enable_backward (bool | None) – If False, the backward pass will not be generated.
module (Module | Literal['unique'] | str | None) – The `warp._src.context.Module` to which the kernel belongs. Alternatively, if the string "unique" is provided, the kernel is assigned to a new module named after the kernel name and hash. If None, the module is inferred from the function's module.
launch_bounds (tuple[int, ...] | int | None) – CUDA `__launch_bounds__` attribute for the kernel. Can be an int (maxThreadsPerBlock) or a tuple of 1-2 ints (maxThreadsPerBlock, minBlocksPerMultiprocessor). Only applies to CUDA kernels. Note: The `block_dim` parameter in `wp.launch()` must not exceed the `maxThreadsPerBlock` value specified here.
- Returns:
The registered kernel.