cuda.core.graph.GraphAllocOptions#

class cuda.core.graph.GraphAllocOptions(
device: int | 'Device' | None = None,
memory_type: str = 'device',
peer_access: list | None = None,
)#

Options for graph memory allocation nodes.

device#

The device on which to allocate memory. If None (default), uses the current CUDA context’s device.

Type:

int or Device, optional

memory_type#

Type of memory to allocate. One of:

  • "device" (default): Device memory, optimal for access from GPU kernels.

  • "host": Pinned host memory, accessible from both host and device. Useful for graphs containing host callback nodes. Note: may not be supported on all systems/drivers.

  • "managed": Managed/unified memory that automatically migrates between host and device. Useful for mixed host/device access patterns.

Type:

str, optional

peer_access#

List of devices that should have read-write access to the allocated memory. If None (default), only the allocating device has access.

Type:

list of int or Device, optional

Notes

  • Per the CUDA documentation, IPC (inter-process communication) is not supported for graph memory allocation nodes.

  • The allocation uses the device’s default memory pool.