warp.autograd.gradcheck
- warp.autograd.gradcheck(
- function,
- dim=None,
- inputs=None,
- outputs=None,
- *,
- eps=1e-4,
- atol=1e-3,
- rtol=1e-2,
- raise_exception=True,
- input_output_mask=None,
- device=None,
- max_blocks=0,
- block_dim=256,
- max_inputs_per_var=-1,
- max_outputs_per_var=-1,
- plot_relative_error=False,
- plot_absolute_error=False,
- show_summary=True,
- )
Checks whether the autodiff gradient of a Warp kernel matches finite differences.
Given the autodiff (\(\nabla_\text{AD}\)) and finite difference gradients (\(\nabla_\text{FD}\)), the check succeeds if the autodiff gradients contain no NaN values and the following condition holds:
\[|\nabla_\text{AD} - \nabla_\text{FD}| \leq atol + rtol \cdot |\nabla_\text{FD}|.\]
The kernel function and its adjoint version are launched with the given inputs and outputs, as well as the provided dim, max_blocks, and block_dim arguments (see warp.launch() for more details).

Note

This function only supports Warp kernels whose input arguments precede the output arguments.
Only Warp arrays with requires_grad=True are considered for the Jacobian computation.
Struct arguments are not yet supported by this function to compute Jacobians.
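The pass condition above can be sketched in plain NumPy. This is an illustrative reimplementation of the documented criterion, not the library's internal code; the default atol and rtol values are taken from the signature above:

```python
import numpy as np

def passes_gradcheck(grad_ad, grad_fd, atol=1e-3, rtol=1e-2):
    # Illustrative version of the documented pass condition:
    # the autodiff gradient contains no NaNs, and elementwise
    # |grad_AD - grad_FD| <= atol + rtol * |grad_FD|.
    grad_ad = np.asarray(grad_ad, dtype=float)
    grad_fd = np.asarray(grad_fd, dtype=float)
    if np.isnan(grad_ad).any():
        return False
    return bool(np.all(np.abs(grad_ad - grad_fd) <= atol + rtol * np.abs(grad_fd)))

print(passes_gradcheck([2.0, 4.0], [2.0001, 3.999]))   # True: mismatch within tolerance
print(passes_gradcheck([2.0, float("nan")], [2.0, 4.0]))  # False: NaN in autodiff gradient
```

Note that the relative term is scaled by the finite-difference gradient, so larger gradients tolerate proportionally larger absolute deviations.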
- Parameters:
function (Kernel | Callable) – The Warp kernel function, decorated with the @wp.kernel decorator, or any function that involves Warp kernel launches.
dim (tuple[int] | None) – The number of threads to launch the kernel; can be an integer or a tuple of ints. Only required if the function is a Warp kernel.
inputs (Sequence | None) – List of input variables.
outputs (Sequence | None) – List of output variables. Only required if the function is a Warp kernel.
eps (float) – The finite-difference step size.
atol (float) – The absolute tolerance for the gradient check.
rtol (float) – The relative tolerance for the gradient check.
raise_exception (bool) – If True, raises a ValueError if the gradient check fails.
input_output_mask (list[tuple[str | int, str | int]] | None) – List of tuples specifying the input-output pairs to compute the Jacobian for. Inputs and outputs can be identified either by their integer indices of where they appear in the kernel input/output arguments, or by the respective argument names as strings. If None, computes the Jacobian for all input-output pairs.
device (Device | str | None) – The device to launch on (optional).
max_blocks (int) – The maximum number of CUDA thread blocks to use.
block_dim (int) – The number of threads per block.
max_inputs_per_var (int) – Maximum number of input dimensions over which to evaluate the Jacobians for the input-output pairs. Evaluates all input dimensions if value <= 0.
max_outputs_per_var (int) – Maximum number of output dimensions over which to evaluate the Jacobians for the input-output pairs. Evaluates all output dimensions if value <= 0.
plot_relative_error (bool) – If True, visualizes the relative error of the Jacobians in a plot (requires matplotlib).
plot_absolute_error (bool) – If True, visualizes the absolute error of the Jacobians in a plot (requires matplotlib).
show_summary (bool) – If True, prints a summary table of the gradient check results.
- Returns:
True if the gradient check passes, False otherwise.
- Return type:
bool