cuda.core.graph.GraphAllocOptions#
- class cuda.core.graph.GraphAllocOptions()#
Options for graph memory allocation nodes.
- device#
The device on which to allocate memory. If None (default), uses the current CUDA context’s device.
- memory_type#
Type of memory to allocate. One of:
"device"(default): Pinned device memory, optimal for GPU kernels."host": Pinned host memory, accessible from both host and device. Useful for graphs containing host callback nodes. Note: may not be supported on all systems/drivers."managed": Managed/unified memory that automatically migrates between host and device. Useful for mixed host/device access patterns.
- Type:
str, optional
- peer_access#
List of devices that should have read-write access to the allocated memory. If None (default), only the allocating device has access.
Notes
IPC (inter-process communication) is not supported for graph memory allocation nodes, per the CUDA documentation.
The allocation uses the device’s default memory pool.
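The attribute semantics above can be modeled with a small pure-Python sketch. The class below is a hypothetical stand-in for illustration only, not the real `cuda.core` implementation; it mirrors the documented field names, defaults, and the three valid `memory_type` values:

```python
from dataclasses import dataclass

# Hypothetical model of the documented options; not the real
# cuda.core.graph.GraphAllocOptions class.
_VALID_MEMORY_TYPES = {"device", "host", "managed"}

@dataclass
class AllocOptionsSketch:
    device: object = None        # None -> current CUDA context's device
    memory_type: str = "device"  # "device", "host", or "managed"
    peer_access: list = None     # devices granted read-write access; None
                                 # -> only the allocating device has access

    def __post_init__(self):
        if self.memory_type not in _VALID_MEMORY_TYPES:
            raise ValueError(f"invalid memory_type: {self.memory_type!r}")

# Defaults match the documented behavior: device memory, current device,
# no peer access.
opts = AllocOptionsSketch()
assert opts.memory_type == "device"
assert opts.device is None and opts.peer_access is None
```

Validating `memory_type` eagerly (at construction rather than at graph-build time) is one reasonable design choice for an options object, since it surfaces typos before any CUDA resources are touched.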