driver

Data types used by CUDA driver

class cuda.bindings.driver.CUuuid_st(void_ptr _ptr=0)
bytes

CUDA definition of UUID

Type:

bytes

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUmemFabricHandle_st(void_ptr _ptr=0)

Fabric handle - An opaque handle representing a memory allocation that can be exported to processes on the same or different nodes. For IPC between processes on different nodes, the nodes must be connected via the NVSwitch fabric.

data
Type:

bytes

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUipcEventHandle_st(void_ptr _ptr=0)

CUDA IPC event handle

reserved
Type:

bytes

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUipcMemHandle_st(void_ptr _ptr=0)

CUDA IPC mem handle

reserved
Type:

bytes

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUstreamBatchMemOpParams_union(void_ptr _ptr=0)

Per-operation parameters for cuStreamBatchMemOp

operation
Type:

CUstreamBatchMemOpType

waitValue
Type:

CUstreamMemOpWaitValueParams_st

writeValue
Type:

CUstreamMemOpWriteValueParams_st

flushRemoteWrites
Type:

CUstreamMemOpFlushRemoteWritesParams_st

memoryBarrier
Type:

CUstreamMemOpMemoryBarrierParams_st

pad
Type:

List[cuuint64_t]

getPtr()

Get memory address of class instance
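
As an illustration, a single wait-value operation can be described through the waitValue member of this union. The following is a minimal sketch, not an authoritative recipe: it assumes an active context, that addr is a CUdeviceptr suitable for stream memory operations, and that nested union members behave as writable views in the binding.

    from cuda.bindings import driver

    # Sketch: describe "wait until *addr >= 1" as one batch memory operation.
    # `addr` is an assumed CUdeviceptr obtained elsewhere (e.g. cuMemAlloc).
    op = driver.CUstreamBatchMemOpParams()
    op.operation = driver.CUstreamBatchMemOpType.CU_STREAM_MEM_OP_WAIT_VALUE_32
    op.waitValue.address = addr
    op.waitValue.value = 1
    op.waitValue.flags = int(driver.CUstreamWaitValue_flags.CU_STREAM_WAIT_VALUE_GEQ)

    # The operation could then be submitted, e.g. cuStreamBatchMemOp(stream, 1, [op], 0).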

class cuda.bindings.driver.CUDA_BATCH_MEM_OP_NODE_PARAMS_v1_st(void_ptr _ptr=0)
ctx
Type:

CUcontext

count
Type:

unsigned int

paramArray
Type:

CUstreamBatchMemOpParams

flags
Type:

unsigned int

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_BATCH_MEM_OP_NODE_PARAMS_v2_st(void_ptr _ptr=0)

Batch memory operation node parameters

ctx

Context to use for the operations.

Type:

CUcontext

count

Number of operations in paramArray.

Type:

unsigned int

paramArray

Array of batch memory operations.

Type:

CUstreamBatchMemOpParams

flags

Flags to control the node.

Type:

unsigned int

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUasyncNotificationInfo_st(void_ptr _ptr=0)

Information passed to the user via the async notification callback

type
Type:

CUasyncNotificationType

info
Type:

anon_union2

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUdevprop_st(void_ptr _ptr=0)

Legacy device properties

maxThreadsPerBlock

Maximum number of threads per block

Type:

int

maxThreadsDim

Maximum size of each dimension of a block

Type:

List[int]

maxGridSize

Maximum size of each dimension of a grid

Type:

List[int]

sharedMemPerBlock

Shared memory available per block in bytes

Type:

int

totalConstantMemory

Constant memory available on device in bytes

Type:

int

SIMDWidth

Warp size in threads

Type:

int

memPitch

Maximum pitch in bytes allowed by memory copies

Type:

int

regsPerBlock

32-bit registers available per block

Type:

int

clockRate

Clock frequency in kilohertz

Type:

int

textureAlign

Alignment requirement for textures

Type:

int

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUaccessPolicyWindow_st(void_ptr _ptr=0)

Specifies an access policy for a window, a contiguous extent of memory beginning at base_ptr and ending at base_ptr + num_bytes. num_bytes is limited by CU_DEVICE_ATTRIBUTE_MAX_ACCESS_POLICY_WINDOW_SIZE. The window is partitioned into many segments, and segments are assigned such that the sum of “hit segments” / window is approximately hitRatio, and the sum of “miss segments” / window is approximately 1 - hitRatio. Segments and ratio specifications are fitted to the capabilities of the architecture. Accesses in a hit segment apply the hitProp access policy; accesses in a miss segment apply the missProp access policy.

base_ptr

Starting address of the access policy window. CUDA driver may align it.

Type:

Any

num_bytes

Size in bytes of the window policy. CUDA driver may restrict the maximum size and alignment.

Type:

size_t

hitRatio

hitRatio specifies the fraction of lines assigned hitProp; the rest are assigned missProp.

Type:

float

hitProp

CUaccessProperty set for hit.

Type:

CUaccessProperty

missProp

CUaccessProperty set for miss. Must be either NORMAL or STREAMING

Type:

CUaccessProperty

getPtr()

Get memory address of class instance
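
A short sketch of how such a window might be populated; dptr and nbytes are assumed to come from an earlier cuMemAlloc call, with nbytes not exceeding CU_DEVICE_ATTRIBUTE_MAX_ACCESS_POLICY_WINDOW_SIZE. The window is typically attached to a launch or stream via the accessPolicyWindow launch attribute.

    from cuda.bindings import driver

    # Sketch: mark roughly 60% of a device buffer as persisting in L2.
    win = driver.CUaccessPolicyWindow()
    win.base_ptr = int(dptr)      # assumed CUdeviceptr
    win.num_bytes = nbytes
    win.hitRatio = 0.6
    win.hitProp = driver.CUaccessProperty.CU_ACCESS_PROPERTY_PERSISTING
    win.missProp = driver.CUaccessProperty.CU_ACCESS_PROPERTY_STREAMING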

class cuda.bindings.driver.CUDA_KERNEL_NODE_PARAMS_st(void_ptr _ptr=0)

GPU kernel node parameters

func

Kernel to launch

Type:

CUfunction

gridDimX

Width of grid in blocks

Type:

unsigned int

gridDimY

Height of grid in blocks

Type:

unsigned int

gridDimZ

Depth of grid in blocks

Type:

unsigned int

blockDimX

X dimension of each thread block

Type:

unsigned int

blockDimY

Y dimension of each thread block

Type:

unsigned int

blockDimZ

Z dimension of each thread block

Type:

unsigned int

sharedMemBytes

Dynamic shared-memory size per thread block in bytes

Type:

unsigned int

kernelParams

Array of pointers to kernel parameters

Type:

Any

extra

Extra options

Type:

Any

getPtr()

Get memory address of class instance
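
For illustration, the sketch below fills these parameters for a kernel that takes no arguments and adds the node with cuGraphAddKernelNode. kernel_func (a CUfunction) and graph (a CUgraph) are assumed to have been obtained elsewhere, e.g. via cuModuleGetFunction and cuGraphCreate.

    from cuda.bindings import driver

    params = driver.CUDA_KERNEL_NODE_PARAMS()
    params.func = kernel_func          # assumed CUfunction
    params.gridDimX = 32
    params.gridDimY = params.gridDimZ = 1
    params.blockDimX = 256
    params.blockDimY = params.blockDimZ = 1
    params.sharedMemBytes = 0
    params.kernelParams = 0            # kernel takes no arguments in this sketch
    params.extra = 0

    err, node = driver.cuGraphAddKernelNode(graph, None, 0, params)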

class cuda.bindings.driver.CUDA_KERNEL_NODE_PARAMS_v2_st(void_ptr _ptr=0)

GPU kernel node parameters

func

Kernel to launch

Type:

CUfunction

gridDimX

Width of grid in blocks

Type:

unsigned int

gridDimY

Height of grid in blocks

Type:

unsigned int

gridDimZ

Depth of grid in blocks

Type:

unsigned int

blockDimX

X dimension of each thread block

Type:

unsigned int

blockDimY

Y dimension of each thread block

Type:

unsigned int

blockDimZ

Z dimension of each thread block

Type:

unsigned int

sharedMemBytes

Dynamic shared-memory size per thread block in bytes

Type:

unsigned int

kernelParams

Array of pointers to kernel parameters

Type:

Any

extra

Extra options

Type:

Any

kern

Kernel to launch, will only be referenced if func is NULL

Type:

CUkernel

ctx

Context for the kernel task to run in. A value of NULL indicates that the current context should be used by the API. This field is ignored if func is set.

Type:

CUcontext

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_KERNEL_NODE_PARAMS_v3_st(void_ptr _ptr=0)

GPU kernel node parameters

func

Kernel to launch

Type:

CUfunction

gridDimX

Width of grid in blocks

Type:

unsigned int

gridDimY

Height of grid in blocks

Type:

unsigned int

gridDimZ

Depth of grid in blocks

Type:

unsigned int

blockDimX

X dimension of each thread block

Type:

unsigned int

blockDimY

Y dimension of each thread block

Type:

unsigned int

blockDimZ

Z dimension of each thread block

Type:

unsigned int

sharedMemBytes

Dynamic shared-memory size per thread block in bytes

Type:

unsigned int

kernelParams

Array of pointers to kernel parameters

Type:

Any

extra

Extra options

Type:

Any

kern

Kernel to launch, will only be referenced if func is NULL

Type:

CUkernel

ctx

Context for the kernel task to run in. A value of NULL indicates that the current context should be used by the API. This field is ignored if func is set.

Type:

CUcontext

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_MEMSET_NODE_PARAMS_st(void_ptr _ptr=0)

Memset node parameters

dst

Destination device pointer

Type:

CUdeviceptr

pitch

Pitch of destination device pointer. Unused if height is 1

Type:

size_t

value

Value to be set

Type:

unsigned int

elementSize

Size of each element in bytes. Must be 1, 2, or 4.

Type:

unsigned int

width

Width of the row in elements

Type:

size_t

height

Number of rows

Type:

size_t

getPtr()

Get memory address of class instance
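
A minimal sketch of a 1D memset node description, assuming graph (CUgraph), ctx (CUcontext), n, and dptr (a CUdeviceptr covering n 32-bit words) already exist:

    from cuda.bindings import driver

    mparams = driver.CUDA_MEMSET_NODE_PARAMS()
    mparams.dst = dptr
    mparams.value = 0
    mparams.elementSize = 4   # 32-bit elements
    mparams.width = n
    mparams.height = 1
    mparams.pitch = 0         # unused when height is 1

    err, node = driver.cuGraphAddMemsetNode(graph, None, 0, mparams, ctx)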

class cuda.bindings.driver.CUDA_MEMSET_NODE_PARAMS_v2_st(void_ptr _ptr=0)

Memset node parameters

dst

Destination device pointer

Type:

CUdeviceptr

pitch

Pitch of destination device pointer. Unused if height is 1

Type:

size_t

value

Value to be set

Type:

unsigned int

elementSize

Size of each element in bytes. Must be 1, 2, or 4.

Type:

unsigned int

width

Width of the row in elements

Type:

size_t

height

Number of rows

Type:

size_t

ctx

Context on which to run the node

Type:

CUcontext

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_HOST_NODE_PARAMS_st(void_ptr _ptr=0)

Host node parameters

fn

The function to call when the node executes

Type:

CUhostFn

userData

Argument to pass to the function

Type:

Any

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_HOST_NODE_PARAMS_v2_st(void_ptr _ptr=0)

Host node parameters

fn

The function to call when the node executes

Type:

CUhostFn

userData

Argument to pass to the function

Type:

Any

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_CONDITIONAL_NODE_PARAMS(void_ptr _ptr=0)

Conditional node parameters

handle

Conditional node handle. Handles must be created in advance of creating the node using cuGraphConditionalHandleCreate.

Type:

CUgraphConditionalHandle

type

Type of conditional node.

Type:

CUgraphConditionalNodeType

size

Size of graph output array. Must be 1.

Type:

unsigned int

phGraph_out

CUDA-owned array populated with conditional node child graphs during creation of the node. Valid for the lifetime of the conditional node. The contents of the graph(s) are subject to the following constraints: - Allowed node types are kernel nodes, empty nodes, child graphs, memsets, memcopies, and conditionals. This applies recursively to child graphs and conditional bodies. - All kernels, including kernels in nested conditionals or child graphs at any level, must belong to the same CUDA context. These graphs may be populated using graph node creation APIs or cuStreamBeginCaptureToGraph.

Type:

CUgraph

ctx

Context on which to run the node. Must match context used to create the handle and all body nodes.

Type:

CUcontext

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUgraphEdgeData_st(void_ptr _ptr=0)

Optional annotation for edges in a CUDA graph. Note that all edges implicitly have annotations and default to a zero-initialized value if not specified. A zero-initialized struct indicates a standard full serialization of two nodes with memory visibility.

from_port

This indicates when the dependency is triggered from the upstream node on the edge. The meaning is specific to the node type. A value of 0 in all cases means full completion of the upstream node, with memory visibility to the downstream node or portion thereof (indicated by to_port). Only kernel nodes define non-zero ports. A kernel node can use the following output port types: CU_GRAPH_KERNEL_NODE_PORT_DEFAULT, CU_GRAPH_KERNEL_NODE_PORT_PROGRAMMATIC, or CU_GRAPH_KERNEL_NODE_PORT_LAUNCH_ORDER.

Type:

bytes

to_port

This indicates what portion of the downstream node is dependent on the upstream node or portion thereof (indicated by from_port). The meaning is specific to the node type. A value of 0 in all cases means the entirety of the downstream node is dependent on the upstream work. Currently no node types define non-zero ports. Accordingly, this field must be set to zero.

Type:

bytes

type

This should be populated with a value from CUgraphDependencyType. (It is typed as char due to compiler-specific layout of bitfields.)

Type:

bytes

reserved

These bytes are unused and must be zeroed. This ensures compatibility if additional fields are added in the future.

Type:

bytes

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_GRAPH_INSTANTIATE_PARAMS_st(void_ptr _ptr=0)

Graph instantiation parameters

flags

Instantiation flags

Type:

cuuint64_t

hUploadStream

Upload stream

Type:

CUstream

hErrNode_out

The node which caused instantiation to fail, if any

Type:

CUgraphNode

result_out

Whether instantiation was successful. If it failed, the reason why

Type:

CUgraphInstantiateResult

getPtr()

Get memory address of class instance
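
For example, instantiation with parameters might look like the following sketch, assuming graph is a fully built CUgraph; on failure, result_out and hErrNode_out report the reason and the offending node.

    from cuda.bindings import driver

    iparams = driver.CUDA_GRAPH_INSTANTIATE_PARAMS()
    iparams.flags = int(driver.CUgraphInstantiate_flags.CUDA_GRAPH_INSTANTIATE_FLAG_AUTO_FREE_ON_LAUNCH)

    err, graph_exec = driver.cuGraphInstantiateWithParams(graph, iparams)
    if iparams.result_out != driver.CUgraphInstantiateResult.CUDA_GRAPH_INSTANTIATE_SUCCESS:
        print("instantiation failed at node:", iparams.hErrNode_out)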

class cuda.bindings.driver.CUlaunchMemSyncDomainMap_st(void_ptr _ptr=0)

Memory synchronization domain map. See cudaLaunchMemSyncDomain. By default, kernels are launched in domain 0. Kernels launched with CU_LAUNCH_MEM_SYNC_DOMAIN_REMOTE will have a different domain ID. The user may also alter the domain ID with CUlaunchMemSyncDomainMap for a specific stream / graph node / kernel launch. See CU_LAUNCH_ATTRIBUTE_MEM_SYNC_DOMAIN_MAP. The domain ID range is available through CU_DEVICE_ATTRIBUTE_MEM_SYNC_DOMAIN_COUNT.

default_

The default domain ID to use for designated kernels

Type:

bytes

remote

The remote domain ID to use for designated kernels

Type:

bytes

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUlaunchAttributeValue_union(void_ptr _ptr=0)

Launch attributes union; used as value field of CUlaunchAttribute

pad
Type:

bytes

accessPolicyWindow

Value of launch attribute CU_LAUNCH_ATTRIBUTE_ACCESS_POLICY_WINDOW.

Type:

CUaccessPolicyWindow

cooperative

Value of launch attribute CU_LAUNCH_ATTRIBUTE_COOPERATIVE. Nonzero indicates a cooperative kernel (see cuLaunchCooperativeKernel).

Type:

int

syncPolicy

Value of launch attribute CU_LAUNCH_ATTRIBUTE_SYNCHRONIZATION_POLICY. CUsynchronizationPolicy for work queued up in this stream.

Type:

CUsynchronizationPolicy

clusterDim

Value of launch attribute CU_LAUNCH_ATTRIBUTE_CLUSTER_DIMENSION that represents the desired cluster dimensions for the kernel. Opaque type with the following fields: - x - The X dimension of the cluster, in blocks. Must be a divisor of the grid X dimension. - y - The Y dimension of the cluster, in blocks. Must be a divisor of the grid Y dimension. - z - The Z dimension of the cluster, in blocks. Must be a divisor of the grid Z dimension.

Type:

anon_struct1

clusterSchedulingPolicyPreference

Value of launch attribute CU_LAUNCH_ATTRIBUTE_CLUSTER_SCHEDULING_POLICY_PREFERENCE. Cluster scheduling policy preference for the kernel.

Type:

CUclusterSchedulingPolicy

programmaticStreamSerializationAllowed

Value of launch attribute CU_LAUNCH_ATTRIBUTE_PROGRAMMATIC_STREAM_SERIALIZATION.

Type:

int

programmaticEvent

Value of launch attribute CU_LAUNCH_ATTRIBUTE_PROGRAMMATIC_EVENT with the following fields: - CUevent event - Event to fire when all blocks trigger it. - int flags - Event record flags, see cuEventRecordWithFlags. Does not accept CU_EVENT_RECORD_EXTERNAL. - int triggerAtBlockStart - If this is set to non-0, each block launch will automatically trigger the event.

Type:

anon_struct2

launchCompletionEvent

Value of launch attribute CU_LAUNCH_ATTRIBUTE_LAUNCH_COMPLETION_EVENT with the following fields: - CUevent event - Event to fire when the last block launches. - int flags - Event record flags, see cuEventRecordWithFlags. Does not accept CU_EVENT_RECORD_EXTERNAL.

Type:

anon_struct3

priority

Value of launch attribute CU_LAUNCH_ATTRIBUTE_PRIORITY. Execution priority of the kernel.

Type:

int

memSyncDomainMap

Value of launch attribute CU_LAUNCH_ATTRIBUTE_MEM_SYNC_DOMAIN_MAP. See CUlaunchMemSyncDomainMap.

Type:

CUlaunchMemSyncDomainMap

memSyncDomain

Value of launch attribute CU_LAUNCH_ATTRIBUTE_MEM_SYNC_DOMAIN. See CUlaunchMemSyncDomain.

Type:

CUlaunchMemSyncDomain

deviceUpdatableKernelNode

Value of launch attribute CU_LAUNCH_ATTRIBUTE_DEVICE_UPDATABLE_KERNEL_NODE with the following fields: - int deviceUpdatable - Whether or not the resulting kernel node should be device-updatable. - CUgraphDeviceNode devNode - Returns a handle to pass to the various device-side update functions.

Type:

anon_struct4

sharedMemCarveout

Value of launch attribute CU_LAUNCH_ATTRIBUTE_PREFERRED_SHARED_MEMORY_CARVEOUT.

Type:

unsigned int

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUlaunchAttribute_st(void_ptr _ptr=0)

Launch attribute

id

Attribute to set

Type:

CUlaunchAttributeID

value

Value of the attribute

Type:

CUlaunchAttributeValue

getPtr()

Get memory address of class instance
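
A small sketch of constructing one attribute (here the execution priority); the attribute is later passed to a launch through CUlaunchConfig::attrs, as shown in the CUlaunchConfig example below.

    from cuda.bindings import driver

    attr = driver.CUlaunchAttribute()
    attr.id = driver.CUlaunchAttributeID.CU_LAUNCH_ATTRIBUTE_PRIORITY
    attr.value.priority = 0   # default priority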

class cuda.bindings.driver.CUlaunchConfig_st(void_ptr _ptr=0)

CUDA extensible launch configuration

gridDimX

Width of grid in blocks

Type:

unsigned int

gridDimY

Height of grid in blocks

Type:

unsigned int

gridDimZ

Depth of grid in blocks

Type:

unsigned int

blockDimX

X dimension of each thread block

Type:

unsigned int

blockDimY

Y dimension of each thread block

Type:

unsigned int

blockDimZ

Z dimension of each thread block

Type:

unsigned int

sharedMemBytes

Dynamic shared-memory size per thread block in bytes

Type:

unsigned int

hStream

Stream identifier

Type:

CUstream

attrs

List of attributes; nullable if CUlaunchConfig::numAttrs == 0

Type:

CUlaunchAttribute

numAttrs

Number of attributes populated in CUlaunchConfig::attrs

Type:

unsigned int

getPtr()

Get memory address of class instance
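
The sketch below launches a no-argument kernel with cuLaunchKernelEx, reusing the attr constructed in the CUlaunchAttribute example above; kernel_func (CUfunction) and stream (CUstream) are assumed to exist. That attrs accepts a plain Python list is an assumption of this sketch.

    from cuda.bindings import driver

    config = driver.CUlaunchConfig()
    config.gridDimX = 128
    config.gridDimY = config.gridDimZ = 1
    config.blockDimX = 256
    config.blockDimY = config.blockDimZ = 1
    config.sharedMemBytes = 0
    config.hStream = stream
    config.attrs = [attr]      # assumed: list of CUlaunchAttribute
    config.numAttrs = 1

    err, = driver.cuLaunchKernelEx(config, kernel_func, 0, 0)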

class cuda.bindings.driver.CUexecAffinitySmCount_st(void_ptr _ptr=0)

Value for CU_EXEC_AFFINITY_TYPE_SM_COUNT

val

The number of SMs the context is limited to use.

Type:

unsigned int

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUexecAffinityParam_st(void_ptr _ptr=0)

Execution Affinity Parameters

type
Type:

CUexecAffinityType

param
Type:

anon_union3

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUctxCigParam_st(void_ptr _ptr=0)

CIG Context Create Params

sharedDataType
Type:

CUcigDataType

sharedData
Type:

Any

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUctxCreateParams_st(void_ptr _ptr=0)

Params for creating a CUDA context. Exactly one of execAffinityParams and cigParams must be non-NULL.

execAffinityParams
Type:

CUexecAffinityParam

numExecAffinityParams
Type:

int

cigParams
Type:

CUctxCigParam

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUlibraryHostUniversalFunctionAndDataTable_st(void_ptr _ptr=0)
functionTable
Type:

Any

functionWindowSize
Type:

size_t

dataTable
Type:

Any

dataWindowSize
Type:

size_t

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_MEMCPY2D_st(void_ptr _ptr=0)

2D memory copy parameters

srcXInBytes

Source X in bytes

Type:

size_t

srcY

Source Y

Type:

size_t

srcMemoryType

Source memory type (host, device, array)

Type:

CUmemorytype

srcHost

Source host pointer

Type:

Any

srcDevice

Source device pointer

Type:

CUdeviceptr

srcArray

Source array reference

Type:

CUarray

srcPitch

Source pitch (ignored when src is array)

Type:

size_t

dstXInBytes

Destination X in bytes

Type:

size_t

dstY

Destination Y

Type:

size_t

dstMemoryType

Destination memory type (host, device, array)

Type:

CUmemorytype

dstHost

Destination host pointer

Type:

Any

dstDevice

Destination device pointer

Type:

CUdeviceptr

dstArray

Destination array reference

Type:

CUarray

dstPitch

Destination pitch (ignored when dst is array)

Type:

size_t

WidthInBytes

Width of 2D memory copy in bytes

Type:

size_t

Height

Height of 2D memory copy

Type:

size_t

getPtr()

Get memory address of class instance
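
For example, a host-to-device copy of a 2D uint8 image might be described as in this sketch; numpy is used only to provide pitched host memory, dptr is assumed to come from cuMemAlloc (at least 64*256 bytes), and a context is assumed to be current.

    import numpy as np
    from cuda.bindings import driver

    host = np.zeros((64, 256), dtype=np.uint8)

    cpy = driver.CUDA_MEMCPY2D()
    cpy.srcMemoryType = driver.CUmemorytype.CU_MEMORYTYPE_HOST
    cpy.srcHost = host.ctypes.data
    cpy.srcPitch = host.strides[0]
    cpy.dstMemoryType = driver.CUmemorytype.CU_MEMORYTYPE_DEVICE
    cpy.dstDevice = dptr
    cpy.dstPitch = host.shape[1]
    cpy.WidthInBytes = host.shape[1]
    cpy.Height = host.shape[0]

    err, = driver.cuMemcpy2D(cpy)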

class cuda.bindings.driver.CUDA_MEMCPY3D_st(void_ptr _ptr=0)

3D memory copy parameters

srcXInBytes

Source X in bytes

Type:

size_t

srcY

Source Y

Type:

size_t

srcZ

Source Z

Type:

size_t

srcLOD

Source LOD

Type:

size_t

srcMemoryType

Source memory type (host, device, array)

Type:

CUmemorytype

srcHost

Source host pointer

Type:

Any

srcDevice

Source device pointer

Type:

CUdeviceptr

srcArray

Source array reference

Type:

CUarray

reserved0

Must be NULL

Type:

Any

srcPitch

Source pitch (ignored when src is array)

Type:

size_t

srcHeight

Source height (ignored when src is array; may be 0 if Depth==1)

Type:

size_t

dstXInBytes

Destination X in bytes

Type:

size_t

dstY

Destination Y

Type:

size_t

dstZ

Destination Z

Type:

size_t

dstLOD

Destination LOD

Type:

size_t

dstMemoryType

Destination memory type (host, device, array)

Type:

CUmemorytype

dstHost

Destination host pointer

Type:

Any

dstDevice

Destination device pointer

Type:

CUdeviceptr

dstArray

Destination array reference

Type:

CUarray

reserved1

Must be NULL

Type:

Any

dstPitch

Destination pitch (ignored when dst is array)

Type:

size_t

dstHeight

Destination height (ignored when dst is array; may be 0 if Depth==1)

Type:

size_t

WidthInBytes

Width of 3D memory copy in bytes

Type:

size_t

Height

Height of 3D memory copy

Type:

size_t

Depth

Depth of 3D memory copy

Type:

size_t

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_MEMCPY3D_PEER_st(void_ptr _ptr=0)

3D memory cross-context copy parameters

srcXInBytes

Source X in bytes

Type:

size_t

srcY

Source Y

Type:

size_t

srcZ

Source Z

Type:

size_t

srcLOD

Source LOD

Type:

size_t

srcMemoryType

Source memory type (host, device, array)

Type:

CUmemorytype

srcHost

Source host pointer

Type:

Any

srcDevice

Source device pointer

Type:

CUdeviceptr

srcArray

Source array reference

Type:

CUarray

srcContext

Source context (ignored when srcMemoryType is CU_MEMORYTYPE_ARRAY)

Type:

CUcontext

srcPitch

Source pitch (ignored when src is array)

Type:

size_t

srcHeight

Source height (ignored when src is array; may be 0 if Depth==1)

Type:

size_t

dstXInBytes

Destination X in bytes

Type:

size_t

dstY

Destination Y

Type:

size_t

dstZ

Destination Z

Type:

size_t

dstLOD

Destination LOD

Type:

size_t

dstMemoryType

Destination memory type (host, device, array)

Type:

CUmemorytype

dstHost

Destination host pointer

Type:

Any

dstDevice

Destination device pointer

Type:

CUdeviceptr

dstArray

Destination array reference

Type:

CUarray

dstContext

Destination context (ignored when dstMemoryType is CU_MEMORYTYPE_ARRAY)

Type:

CUcontext

dstPitch

Destination pitch (ignored when dst is array)

Type:

size_t

dstHeight

Destination height (ignored when dst is array; may be 0 if Depth==1)

Type:

size_t

WidthInBytes

Width of 3D memory copy in bytes

Type:

size_t

Height

Height of 3D memory copy

Type:

size_t

Depth

Depth of 3D memory copy

Type:

size_t

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_MEMCPY_NODE_PARAMS_st(void_ptr _ptr=0)

Memcpy node parameters

flags

Must be zero

Type:

int

reserved

Must be zero

Type:

int

copyCtx

Context on which to run the node

Type:

CUcontext

copyParams

Parameters for the memory copy

Type:

CUDA_MEMCPY3D

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_ARRAY_DESCRIPTOR_st(void_ptr _ptr=0)

Array descriptor

Width

Width of array

Type:

size_t

Height

Height of array

Type:

size_t

Format

Array format

Type:

CUarray_format

NumChannels

Channels per array element

Type:

unsigned int

getPtr()

Get memory address of class instance
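
A minimal sketch of creating a 2D, single-channel float array from this descriptor (an active context is assumed):

    from cuda.bindings import driver

    desc = driver.CUDA_ARRAY_DESCRIPTOR()
    desc.Width = 1024
    desc.Height = 768
    desc.Format = driver.CUarray_format.CU_AD_FORMAT_FLOAT
    desc.NumChannels = 1

    err, cu_array = driver.cuArrayCreate(desc)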

class cuda.bindings.driver.CUDA_ARRAY3D_DESCRIPTOR_st(void_ptr _ptr=0)

3D array descriptor

Width

Width of 3D array

Type:

size_t

Height

Height of 3D array

Type:

size_t

Depth

Depth of 3D array

Type:

size_t

Format

Array format

Type:

CUarray_format

NumChannels

Channels per array element

Type:

unsigned int

Flags

Flags

Type:

unsigned int

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_ARRAY_SPARSE_PROPERTIES_st(void_ptr _ptr=0)

CUDA array sparse properties

tileExtent
Type:

anon_struct5

miptailFirstLevel

First mip level at which the mip tail begins.

Type:

unsigned int

miptailSize

Total size of the mip tail.

Type:

unsigned long long

flags

Flags will either be zero or CU_ARRAY_SPARSE_PROPERTIES_SINGLE_MIPTAIL

Type:

unsigned int

reserved
Type:

List[unsigned int]

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_ARRAY_MEMORY_REQUIREMENTS_st(void_ptr _ptr=0)

CUDA array memory requirements

size

Total required memory size

Type:

size_t

alignment

Alignment requirement

Type:

size_t

reserved
Type:

List[unsigned int]

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_RESOURCE_DESC_st(void_ptr _ptr=0)

CUDA Resource descriptor

resType

Resource type

Type:

CUresourcetype

res
Type:

anon_union4

flags

Flags (must be zero)

Type:

unsigned int

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_TEXTURE_DESC_st(void_ptr _ptr=0)

Texture descriptor

addressMode

Address modes

Type:

List[CUaddress_mode]

filterMode

Filter mode

Type:

CUfilter_mode

flags

Flags

Type:

unsigned int

maxAnisotropy

Maximum anisotropy ratio

Type:

unsigned int

mipmapFilterMode

Mipmap filter mode

Type:

CUfilter_mode

mipmapLevelBias

Mipmap level bias

Type:

float

minMipmapLevelClamp

Mipmap minimum level clamp

Type:

float

maxMipmapLevelClamp

Mipmap maximum level clamp

Type:

float

borderColor

Border Color

Type:

List[float]

reserved
Type:

List[int]

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_RESOURCE_VIEW_DESC_st(void_ptr _ptr=0)

Resource view descriptor

format

Resource view format

Type:

CUresourceViewFormat

width

Width of the resource view

Type:

size_t

height

Height of the resource view

Type:

size_t

depth

Depth of the resource view

Type:

size_t

firstMipmapLevel

First defined mipmap level

Type:

unsigned int

lastMipmapLevel

Last defined mipmap level

Type:

unsigned int

firstLayer

First layer index

Type:

unsigned int

lastLayer

Last layer index

Type:

unsigned int

reserved
Type:

List[unsigned int]

getPtr()

Get memory address of class instance
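
Taken together, the resource and texture descriptors are enough to create a texture object; the resource-view descriptor is optional. A sketch, assuming cu_array is a CUarray created with cuArrayCreate and that nested union members are writable views:

    from cuda.bindings import driver

    res_desc = driver.CUDA_RESOURCE_DESC()
    res_desc.resType = driver.CUresourcetype.CU_RESOURCE_TYPE_ARRAY
    res_desc.res.array.hArray = cu_array

    tex_desc = driver.CUDA_TEXTURE_DESC()
    tex_desc.filterMode = driver.CUfilter_mode.CU_TR_FILTER_MODE_LINEAR
    tex_desc.flags = 0

    err, tex = driver.cuTexObjectCreate(res_desc, tex_desc, None)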

class cuda.bindings.driver.CUtensorMap_st(void_ptr _ptr=0)

Tensor map descriptor. Requires compiler support for aligning to 64 bytes.

opaque
Type:

List[cuuint64_t]

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_POINTER_ATTRIBUTE_P2P_TOKENS_st(void_ptr _ptr=0)

GPU Direct v3 tokens

p2pToken
Type:

unsigned long long

vaSpaceToken
Type:

unsigned int

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_LAUNCH_PARAMS_st(void_ptr _ptr=0)

Kernel launch parameters

function

Kernel to launch

Type:

CUfunction

gridDimX

Width of grid in blocks

Type:

unsigned int

gridDimY

Height of grid in blocks

Type:

unsigned int

gridDimZ

Depth of grid in blocks

Type:

unsigned int

blockDimX

X dimension of each thread block

Type:

unsigned int

blockDimY

Y dimension of each thread block

Type:

unsigned int

blockDimZ

Z dimension of each thread block

Type:

unsigned int

sharedMemBytes

Dynamic shared-memory size per thread block in bytes

Type:

unsigned int

hStream

Stream identifier

Type:

CUstream

kernelParams

Array of pointers to kernel parameters

Type:

Any

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_EXTERNAL_MEMORY_HANDLE_DESC_st(void_ptr _ptr=0)

External memory handle descriptor

type

Type of the handle

Type:

CUexternalMemoryHandleType

handle
Type:

anon_union5

size

Size of the memory allocation

Type:

unsigned long long

flags

Flags must either be zero or CUDA_EXTERNAL_MEMORY_DEDICATED

Type:

unsigned int

reserved
Type:

List[unsigned int]

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_EXTERNAL_MEMORY_BUFFER_DESC_st(void_ptr _ptr=0)

External memory buffer descriptor

offset

Offset into the memory object where the buffer’s base is

Type:

unsigned long long

size

Size of the buffer

Type:

unsigned long long

flags

Flags reserved for future use. Must be zero.

Type:

unsigned int

reserved
Type:

List[unsigned int]

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_EXTERNAL_MEMORY_MIPMAPPED_ARRAY_DESC_st(void_ptr _ptr=0)

External memory mipmap descriptor

offset

Offset into the memory object where the base level of the mipmap chain is.

Type:

unsigned long long

arrayDesc

Format, dimension and type of base level of the mipmap chain

Type:

CUDA_ARRAY3D_DESCRIPTOR

numLevels

Total number of levels in the mipmap chain

Type:

unsigned int

reserved
Type:

List[unsigned int]

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_EXTERNAL_SEMAPHORE_HANDLE_DESC_st(void_ptr _ptr=0)

External semaphore handle descriptor

type

Type of the handle

Type:

CUexternalSemaphoreHandleType

handle
Type:

anon_union6

flags

Flags reserved for the future. Must be zero.

Type:

unsigned int

reserved
Type:

List[unsigned int]

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_EXTERNAL_SEMAPHORE_SIGNAL_PARAMS_st(void_ptr _ptr=0)

External semaphore signal parameters

params
Type:

anon_struct15

flags

Only when CUDA_EXTERNAL_SEMAPHORE_SIGNAL_PARAMS is used to signal a CUexternalSemaphore of type CU_EXTERNAL_SEMAPHORE_HANDLE_TYPE_NVSCISYNC is the flag CUDA_EXTERNAL_SEMAPHORE_SIGNAL_SKIP_NVSCIBUF_MEMSYNC valid. It indicates that, while signaling the CUexternalSemaphore, no memory synchronization operations should be performed for any external memory object imported as CU_EXTERNAL_MEMORY_HANDLE_TYPE_NVSCIBUF. For all other types of CUexternalSemaphore, flags must be zero.

Type:

unsigned int

reserved
Type:

List[unsigned int]

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_EXTERNAL_SEMAPHORE_WAIT_PARAMS_st(void_ptr _ptr=0)

External semaphore wait parameters

params
Type:

anon_struct18

flags

Only when CUDA_EXTERNAL_SEMAPHORE_WAIT_PARAMS is used to wait on a CUexternalSemaphore of type CU_EXTERNAL_SEMAPHORE_HANDLE_TYPE_NVSCISYNC is the flag CUDA_EXTERNAL_SEMAPHORE_WAIT_SKIP_NVSCIBUF_MEMSYNC valid. It indicates that, while waiting for the CUexternalSemaphore, no memory synchronization operations should be performed for any external memory object imported as CU_EXTERNAL_MEMORY_HANDLE_TYPE_NVSCIBUF. For all other types of CUexternalSemaphore, flags must be zero.

Type:

unsigned int

reserved
Type:

List[unsigned int]

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_EXT_SEM_SIGNAL_NODE_PARAMS_st(void_ptr _ptr=0)

Semaphore signal node parameters

extSemArray

Array of external semaphore handles.

Type:

CUexternalSemaphore

paramsArray

Array of external semaphore signal parameters.

Type:

CUDA_EXTERNAL_SEMAPHORE_SIGNAL_PARAMS

numExtSems

Number of handles and parameters supplied in extSemArray and paramsArray.

Type:

unsigned int

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_EXT_SEM_SIGNAL_NODE_PARAMS_v2_st(void_ptr _ptr=0)

Semaphore signal node parameters

extSemArray

Array of external semaphore handles.

Type:

CUexternalSemaphore

paramsArray

Array of external semaphore signal parameters.

Type:

CUDA_EXTERNAL_SEMAPHORE_SIGNAL_PARAMS

numExtSems

Number of handles and parameters supplied in extSemArray and paramsArray.

Type:

unsigned int

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_EXT_SEM_WAIT_NODE_PARAMS_st(void_ptr _ptr=0)

Semaphore wait node parameters

extSemArray

Array of external semaphore handles.

Type:

CUexternalSemaphore

paramsArray

Array of external semaphore wait parameters.

Type:

CUDA_EXTERNAL_SEMAPHORE_WAIT_PARAMS

numExtSems

Number of handles and parameters supplied in extSemArray and paramsArray.

Type:

unsigned int

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_EXT_SEM_WAIT_NODE_PARAMS_v2_st(void_ptr _ptr=0)

Semaphore wait node parameters

extSemArray

Array of external semaphore handles.

Type:

CUexternalSemaphore

paramsArray

Array of external semaphore wait parameters.

Type:

CUDA_EXTERNAL_SEMAPHORE_WAIT_PARAMS

numExtSems

Number of handles and parameters supplied in extSemArray and paramsArray.

Type:

unsigned int

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUarrayMapInfo_st(void_ptr _ptr=0)

Specifies the CUDA array or CUDA mipmapped array memory mapping information

resourceType

Resource type

Type:

CUresourcetype

resource
Type:

anon_union9

subresourceType

Sparse subresource type

Type:

CUarraySparseSubresourceType

subresource
Type:

anon_union10

memOperationType

Memory operation type

Type:

CUmemOperationType

memHandleType

Memory handle type

Type:

CUmemHandleType

memHandle
Type:

anon_union11

offset

Offset within the mip tail, or offset within the memory.

Type:

unsigned long long

deviceBitMask

Device ordinal bit mask

Type:

unsigned int

flags

Flags for future use; must be zero now.

Type:

unsigned int

reserved

Reserved for future use, must be zero now.

Type:

List[unsigned int]

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUmemLocation_st(void_ptr _ptr=0)

Specifies a memory location.

type

Specifies the location type, which modifies the meaning of id.

Type:

CUmemLocationType

id

Identifier for the location; its meaning depends on this location's CUmemLocationType.

Type:

int

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUmemAllocationProp_st(void_ptr _ptr=0)

Specifies the allocation properties for an allocation.

type

Allocation type

Type:

CUmemAllocationType

requestedHandleTypes

Requested CUmemAllocationHandleType

Type:

CUmemAllocationHandleType

location

Location of allocation

Type:

CUmemLocation

win32HandleMetaData

Windows-specific POBJECT_ATTRIBUTES required when CU_MEM_HANDLE_TYPE_WIN32 is specified. This object attributes structure includes security attributes that define the scope of which exported allocations may be transferred to other processes. In all other cases, this field is required to be zero.

Type:

Any

allocFlags
Type:

anon_struct21

getPtr()

Get memory address of class instance
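
A sketch of creating physical memory with these properties, assuming size is a multiple of the granularity reported by cuMemGetAllocationGranularity for the same properties:

    from cuda.bindings import driver

    prop = driver.CUmemAllocationProp()
    prop.type = driver.CUmemAllocationType.CU_MEM_ALLOCATION_TYPE_PINNED
    prop.requestedHandleTypes = driver.CUmemAllocationHandleType.CU_MEM_HANDLE_TYPE_NONE
    prop.location.type = driver.CUmemLocationType.CU_MEM_LOCATION_TYPE_DEVICE
    prop.location.id = 0   # device ordinal

    err, handle = driver.cuMemCreate(size, prop, 0)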

class cuda.bindings.driver.CUmulticastObjectProp_st(void_ptr _ptr=0)

Specifies the properties for a multicast object.

numDevices

The number of devices in the multicast team that will bind memory to this object

Type:

unsigned int

size

The maximum amount of memory that can be bound to this multicast object per device

Type:

size_t

handleTypes

Bitmask of exportable handle types (see CUmemAllocationHandleType) for this object

Type:

unsigned long long

flags

Flags for future use, must be zero now

Type:

unsigned long long

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUmemAccessDesc_st(void_ptr _ptr=0)

Memory access descriptor

location

Location on which the request is to change its accessibility

Type:

CUmemLocation

flags

CUmemProt accessibility flags to set on the request

Type:

CUmemAccess_flags

getPtr()

Get memory address of class instance
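
For example, granting device 0 read/write access to a mapped virtual address range might look like this sketch; ptr and size are assumed to describe a range previously reserved with cuMemAddressReserve and mapped with cuMemMap.

    from cuda.bindings import driver

    desc = driver.CUmemAccessDesc()
    desc.location.type = driver.CUmemLocationType.CU_MEM_LOCATION_TYPE_DEVICE
    desc.location.id = 0
    desc.flags = driver.CUmemAccess_flags.CU_MEM_ACCESS_FLAGS_PROT_READWRITE

    err, = driver.cuMemSetAccess(ptr, size, [desc], 1)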

class cuda.bindings.driver.CUgraphExecUpdateResultInfo_st(void_ptr _ptr=0)

Result information returned by cuGraphExecUpdate

result

Gives more specific detail when a CUDA graph update fails.

Type:

CUgraphExecUpdateResult

errorNode

The “to node” of the error edge when the topologies do not match. The error node when the error is associated with a specific node. NULL when the error is generic.

Type:

CUgraphNode

errorFromNode

The “from node” of the error edge when the topologies do not match. Otherwise NULL.

Type:

CUgraphNode

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUmemPoolProps_st(void_ptr _ptr=0)

Specifies the properties of allocations made from the pool.

allocType

Allocation type. Currently must be specified as CU_MEM_ALLOCATION_TYPE_PINNED

Type:

CUmemAllocationType

handleTypes

Handle types that will be supported by allocations from the pool.

Type:

CUmemAllocationHandleType

location

Location where allocations should reside.

Type:

CUmemLocation

win32SecurityAttributes

Windows-specific LPSECURITYATTRIBUTES required when CU_MEM_HANDLE_TYPE_WIN32 is specified. This security attribute defines the scope of which exported allocations may be transferred to other processes. In all other cases, this field is required to be zero.

Type:

Any

maxSize

Maximum pool size. When set to 0, defaults to a system dependent value.

Type:

size_t

usage

Bitmask indicating intended usage for the pool.

Type:

unsigned short

reserved

Reserved for future use; must be 0.

Type:

bytes

getPtr()

Get memory address of class instance
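
A sketch of creating an explicit memory pool on device 0 from these properties (an active context is assumed):

    from cuda.bindings import driver

    pool_props = driver.CUmemPoolProps()
    pool_props.allocType = driver.CUmemAllocationType.CU_MEM_ALLOCATION_TYPE_PINNED
    pool_props.handleTypes = driver.CUmemAllocationHandleType.CU_MEM_HANDLE_TYPE_NONE
    pool_props.location.type = driver.CUmemLocationType.CU_MEM_LOCATION_TYPE_DEVICE
    pool_props.location.id = 0
    pool_props.maxSize = 0   # 0 selects a system-dependent default

    err, pool = driver.cuMemPoolCreate(pool_props)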

class cuda.bindings.driver.CUmemPoolPtrExportData_st(void_ptr _ptr=0)

Opaque data for exporting a pool allocation

reserved
Type:

bytes

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_MEM_ALLOC_NODE_PARAMS_v1_st(void_ptr _ptr=0)

Memory allocation node parameters

poolProps

in: location where the allocation should reside (specified in location). handleTypes must be CU_MEM_HANDLE_TYPE_NONE. IPC is not supported.

Type:

CUmemPoolProps

accessDescs

in: array of memory access descriptors. Used to describe peer GPU access

Type:

CUmemAccessDesc

accessDescCount

in: number of memory access descriptors. Must not exceed the number of GPUs.

Type:

size_t

bytesize

in: size in bytes of the requested allocation

Type:

size_t

dptr

out: address of the allocation returned by CUDA

Type:

CUdeviceptr

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_MEM_ALLOC_NODE_PARAMS_v2_st(void_ptr _ptr=0)

Memory allocation node parameters

poolProps

in: location where the allocation should reside (specified in location). handleTypes must be CU_MEM_HANDLE_TYPE_NONE. IPC is not supported.

Type:

CUmemPoolProps

accessDescs

in: array of memory access descriptors. Used to describe peer GPU access

Type:

CUmemAccessDesc

accessDescCount

in: number of memory access descriptors. Must not exceed the number of GPUs.

Type:

size_t

bytesize

in: size in bytes of the requested allocation

Type:

size_t

dptr

out: address of the allocation returned by CUDA

Type:

CUdeviceptr

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_MEM_FREE_NODE_PARAMS_st(void_ptr _ptr=0)

Memory free node parameters

dptr

in: the pointer to free

Type:

CUdeviceptr

getPtr()

Get memory address of class instance
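
A combined sketch of the allocation and free node parameters above: it adds a 1 MiB allocation node on device 0, then a free node that consumes the dptr output. graph is an assumed CUgraph, and nested struct members are assumed to be writable views.

    from cuda.bindings import driver

    aparams = driver.CUDA_MEM_ALLOC_NODE_PARAMS()
    aparams.poolProps.allocType = driver.CUmemAllocationType.CU_MEM_ALLOCATION_TYPE_PINNED
    aparams.poolProps.location.type = driver.CUmemLocationType.CU_MEM_LOCATION_TYPE_DEVICE
    aparams.poolProps.location.id = 0
    aparams.bytesize = 1 << 20

    err, alloc_node = driver.cuGraphAddMemAllocNode(graph, None, 0, aparams)
    # aparams.dptr now holds the address CUDA assigned to the allocation.
    err, free_node = driver.cuGraphAddMemFreeNode(graph, [alloc_node], 1, aparams.dptr)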

class cuda.bindings.driver.CUDA_CHILD_GRAPH_NODE_PARAMS_st(void_ptr _ptr=0)

Child graph node parameters

graph

The child graph to clone into the node for node creation, or a handle to the graph owned by the node for node query

Type:

CUgraph

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_EVENT_RECORD_NODE_PARAMS_st(void_ptr _ptr=0)

Event record node parameters

event

The event to record when the node executes

Type:

CUevent

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_EVENT_WAIT_NODE_PARAMS_st(void_ptr _ptr=0)

Event wait node parameters

event

The event to wait on from the node

Type:

CUevent

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUgraphNodeParams_st(void_ptr _ptr=0)

Graph node parameters. See cuGraphAddNode.

type

Type of the node

Type:

CUgraphNodeType

reserved0

Reserved. Must be zero.

Type:

List[int]

reserved1

Padding. Unused bytes must be zero.

Type:

List[long long]

kernel

Kernel node parameters.

Type:

CUDA_KERNEL_NODE_PARAMS_v3

memcpy

Memcpy node parameters.

Type:

CUDA_MEMCPY_NODE_PARAMS

memset

Memset node parameters.

Type:

CUDA_MEMSET_NODE_PARAMS_v2

host

Host node parameters.

Type:

CUDA_HOST_NODE_PARAMS_v2

graph

Child graph node parameters.

Type:

CUDA_CHILD_GRAPH_NODE_PARAMS

eventWait

Event wait node parameters.

Type:

CUDA_EVENT_WAIT_NODE_PARAMS

eventRecord

Event record node parameters.

Type:

CUDA_EVENT_RECORD_NODE_PARAMS

extSemSignal

External semaphore signal node parameters.

Type:

CUDA_EXT_SEM_SIGNAL_NODE_PARAMS_v2

extSemWait

External semaphore wait node parameters.

Type:

CUDA_EXT_SEM_WAIT_NODE_PARAMS_v2

alloc

Memory allocation node parameters.

Type:

CUDA_MEM_ALLOC_NODE_PARAMS_v2

free

Memory free node parameters.

Type:

CUDA_MEM_FREE_NODE_PARAMS

memOp

MemOp node parameters.

Type:

CUDA_BATCH_MEM_OP_NODE_PARAMS_v2

conditional

Conditional node parameters.

Type:

CUDA_CONDITIONAL_NODE_PARAMS

reserved2

Reserved bytes. Must be zero.

Type:

long long

getPtr()

Get memory address of class instance
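
As an illustration of the generic node-creation path, the sketch below wraps event-record parameters in CUgraphNodeParams and adds the node with cuGraphAddNode; graph (CUgraph) and event (CUevent) are assumed to exist.

    from cuda.bindings import driver

    node_params = driver.CUgraphNodeParams()
    node_params.type = driver.CUgraphNodeType.CU_GRAPH_NODE_TYPE_EVENT_RECORD
    node_params.eventRecord.event = event   # assumes nested members are writable views

    err, node = driver.cuGraphAddNode(graph, None, 0, node_params)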

class cuda.bindings.driver.CUeglFrame_st(void_ptr _ptr=0)

CUDA EGLFrame structure descriptor - structure defining one frame of EGL. Each frame may contain one or more planes depending on whether the surface is multiplanar or not.

frame
Type:

anon_union14

width

Width of first plane

Type:

unsigned int

height

Height of first plane

Type:

unsigned int

depth

Depth of first plane

Type:

unsigned int

pitch

Pitch of first plane

Type:

unsigned int

planeCount

Number of planes

Type:

unsigned int

numChannels

Number of channels for the plane

Type:

unsigned int

frameType

Array or Pitch

Type:

CUeglFrameType

eglColorFormat

CUDA EGL Color Format

Type:

CUeglColorFormat

cuFormat

CUDA Array Format

Type:

CUarray_format

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUipcMem_flags(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

CUDA IPC mem flags

CU_IPC_MEM_LAZY_ENABLE_PEER_ACCESS = 1

Automatically enable peer access between remote devices as needed

class cuda.bindings.driver.CUmemAttach_flags(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

CUDA Mem Attach Flags

CU_MEM_ATTACH_GLOBAL = 1

Memory can be accessed by any stream on any device

CU_MEM_ATTACH_HOST = 2

Memory cannot be accessed by any stream on any device

CU_MEM_ATTACH_SINGLE = 4

Memory can only be accessed by a single stream on the associated device

class cuda.bindings.driver.CUctx_flags(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Context creation flags

CU_CTX_SCHED_AUTO = 0

Automatic scheduling

CU_CTX_SCHED_SPIN = 1

Set spin as default scheduling

CU_CTX_SCHED_YIELD = 2

Set yield as default scheduling

CU_CTX_SCHED_BLOCKING_SYNC = 4

Set blocking synchronization as default scheduling

CU_CTX_BLOCKING_SYNC = 4

Set blocking synchronization as default scheduling [Deprecated]

CU_CTX_SCHED_MASK = 7
CU_CTX_MAP_HOST = 8

[Deprecated]

CU_CTX_LMEM_RESIZE_TO_MAX = 16

Keep local memory allocation after launch

CU_CTX_COREDUMP_ENABLE = 32

Trigger coredumps from exceptions in this context

CU_CTX_USER_COREDUMP_ENABLE = 64

Enable user pipe to trigger coredumps in this context

CU_CTX_SYNC_MEMOPS = 128

Ensure synchronous memory operations on this context will synchronize

CU_CTX_FLAGS_MASK = 255
class cuda.bindings.driver.CUevent_sched_flags(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Event sched flags

CU_EVENT_SCHED_AUTO = 0

Automatic scheduling

CU_EVENT_SCHED_SPIN = 1

Set spin as default scheduling

CU_EVENT_SCHED_YIELD = 2

Set yield as default scheduling

CU_EVENT_SCHED_BLOCKING_SYNC = 4

Set blocking synchronization as default scheduling

class cuda.bindings.driver.cl_event_flags(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

NVCL event scheduling flags

NVCL_EVENT_SCHED_AUTO = 0

Automatic scheduling

NVCL_EVENT_SCHED_SPIN = 1

Set spin as default scheduling

NVCL_EVENT_SCHED_YIELD = 2

Set yield as default scheduling

NVCL_EVENT_SCHED_BLOCKING_SYNC = 4

Set blocking synchronization as default scheduling

class cuda.bindings.driver.cl_context_flags(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

NVCL context scheduling flags

NVCL_CTX_SCHED_AUTO = 0

Automatic scheduling

NVCL_CTX_SCHED_SPIN = 1

Set spin as default scheduling

NVCL_CTX_SCHED_YIELD = 2

Set yield as default scheduling

NVCL_CTX_SCHED_BLOCKING_SYNC = 4

Set blocking synchronization as default scheduling

class cuda.bindings.driver.CUstream_flags(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Stream creation flags

CU_STREAM_DEFAULT = 0

Default stream flag

CU_STREAM_NON_BLOCKING = 1

Stream does not synchronize with stream 0 (the NULL stream)

class cuda.bindings.driver.CUevent_flags(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Event creation flags

CU_EVENT_DEFAULT = 0

Default event flag

CU_EVENT_BLOCKING_SYNC = 1

Event uses blocking synchronization

CU_EVENT_DISABLE_TIMING = 2

Event will not record timing data

CU_EVENT_INTERPROCESS = 4

Event is suitable for interprocess use. CU_EVENT_DISABLE_TIMING must be set
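
These flags are passed directly to the creation calls, for example (a sketch assuming cuInit has been called and a context is current):

    from cuda.bindings import driver

    err, stream = driver.cuStreamCreate(driver.CUstream_flags.CU_STREAM_NON_BLOCKING)
    err, event = driver.cuEventCreate(driver.CUevent_flags.CU_EVENT_DISABLE_TIMING)

    err, = driver.cuEventRecord(event, stream)
    err, = driver.cuEventSynchronize(event)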

class cuda.bindings.driver.CUevent_record_flags(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Event record flags

CU_EVENT_RECORD_DEFAULT = 0

Default event record flag

CU_EVENT_RECORD_EXTERNAL = 1

When using stream capture, create an event record node instead of the default behavior. This flag is invalid when used outside of capture.

class cuda.bindings.driver.CUevent_wait_flags(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Event wait flags

CU_EVENT_WAIT_DEFAULT = 0

Default event wait flag

CU_EVENT_WAIT_EXTERNAL = 1

When using stream capture, create an event wait node instead of the default behavior. This flag is invalid when used outside of capture.

class cuda.bindings.driver.CUstreamWaitValue_flags(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Flags for cuStreamWaitValue32 and cuStreamWaitValue64

CU_STREAM_WAIT_VALUE_GEQ = 0

Wait until (int32_t)(*addr - value) >= 0 (or int64_t for 64 bit values). Note this is a cyclic comparison which ignores wraparound. (Default behavior.)

CU_STREAM_WAIT_VALUE_EQ = 1

Wait until *addr == value.

CU_STREAM_WAIT_VALUE_AND = 2

Wait until (*addr & value) != 0.

CU_STREAM_WAIT_VALUE_NOR = 3

Wait until ~(*addr | value) != 0. Support for this operation can be queried with cuDeviceGetAttribute() and CU_DEVICE_ATTRIBUTE_CAN_USE_STREAM_WAIT_VALUE_NOR.

CU_STREAM_WAIT_VALUE_FLUSH = 1073741824

Follow the wait operation with a flush of outstanding remote writes. This means that, if a remote write operation is guaranteed to have reached the device before the wait can be satisfied, that write is guaranteed to be visible to downstream device work. The device is permitted to reorder remote writes internally. For example, this flag would be required if two remote writes arrive in a defined order, the wait is satisfied by the second write, and downstream work needs to observe the first write. Support for this operation is restricted to selected platforms and can be queried with CU_DEVICE_ATTRIBUTE_CAN_FLUSH_REMOTE_WRITES.

class cuda.bindings.driver.CUstreamWriteValue_flags(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Flags for cuStreamWriteValue32

CU_STREAM_WRITE_VALUE_DEFAULT = 0

Default behavior

CU_STREAM_WRITE_VALUE_NO_MEMORY_BARRIER = 1

Permits the write to be reordered with writes which were issued before it, as a performance optimization. Normally, cuStreamWriteValue32 will provide a memory fence before the write, which has similar semantics to __threadfence_system() but is scoped to the stream rather than a CUDA thread. This flag is not supported in the v2 API.
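
A sketch of the corresponding stream memory operations, assuming stream is a CUstream and addr is a CUdeviceptr suitable for stream memory operations:

    from cuda.bindings import driver

    # Enqueue: wait until *addr >= 1, then write 2 to *addr, all on `stream`.
    err, = driver.cuStreamWaitValue32(
        stream, addr, 1, driver.CUstreamWaitValue_flags.CU_STREAM_WAIT_VALUE_GEQ)
    err, = driver.cuStreamWriteValue32(
        stream, addr, 2, driver.CUstreamWriteValue_flags.CU_STREAM_WRITE_VALUE_DEFAULT)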

class cuda.bindings.driver.CUstreamBatchMemOpType(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Operations for cuStreamBatchMemOp

CU_STREAM_MEM_OP_WAIT_VALUE_32 = 1

Represents a cuStreamWaitValue32 operation

CU_STREAM_MEM_OP_WRITE_VALUE_32 = 2

Represents a cuStreamWriteValue32 operation

CU_STREAM_MEM_OP_WAIT_VALUE_64 = 4

Represents a cuStreamWaitValue64 operation

CU_STREAM_MEM_OP_WRITE_VALUE_64 = 5

Represents a cuStreamWriteValue64 operation

CU_STREAM_MEM_OP_BARRIER = 6

Insert a memory barrier of the specified type

CU_STREAM_MEM_OP_FLUSH_REMOTE_WRITES = 3

This has the same effect as CU_STREAM_WAIT_VALUE_FLUSH, but as a standalone operation.

class cuda.bindings.driver.CUstreamMemoryBarrier_flags(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Flags for cuStreamMemoryBarrier

CU_STREAM_MEMORY_BARRIER_TYPE_SYS = 0

System-wide memory barrier.

CU_STREAM_MEMORY_BARRIER_TYPE_GPU = 1

Limit memory barrier scope to the GPU.

class cuda.bindings.driver.CUoccupancy_flags(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Occupancy calculator flag

CU_OCCUPANCY_DEFAULT = 0

Default behavior

CU_OCCUPANCY_DISABLE_CACHING_OVERRIDE = 1

Assume global caching is enabled and cannot be automatically turned off

class cuda.bindings.driver.CUstreamUpdateCaptureDependencies_flags(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Flags for cuStreamUpdateCaptureDependencies

CU_STREAM_ADD_CAPTURE_DEPENDENCIES = 0

Add new nodes to the dependency set

CU_STREAM_SET_CAPTURE_DEPENDENCIES = 1

Replace the dependency set with the new nodes

class cuda.bindings.driver.CUasyncNotificationType(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Types of async notification that can be sent

CU_ASYNC_NOTIFICATION_TYPE_OVER_BUDGET = 1
class cuda.bindings.driver.CUarray_format(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Array formats

CU_AD_FORMAT_UNSIGNED_INT8 = 1

Unsigned 8-bit integers

CU_AD_FORMAT_UNSIGNED_INT16 = 2

Unsigned 16-bit integers

CU_AD_FORMAT_UNSIGNED_INT32 = 3

Unsigned 32-bit integers

CU_AD_FORMAT_SIGNED_INT8 = 8

Signed 8-bit integers

CU_AD_FORMAT_SIGNED_INT16 = 9

Signed 16-bit integers

CU_AD_FORMAT_SIGNED_INT32 = 10

Signed 32-bit integers

CU_AD_FORMAT_HALF = 16

16-bit floating point

CU_AD_FORMAT_FLOAT = 32

32-bit floating point

CU_AD_FORMAT_NV12 = 176

8-bit YUV planar format, with 4:2:0 sampling

CU_AD_FORMAT_UNORM_INT8X1 = 192

1 channel unsigned 8-bit normalized integer

CU_AD_FORMAT_UNORM_INT8X2 = 193

2 channel unsigned 8-bit normalized integer

CU_AD_FORMAT_UNORM_INT8X4 = 194

4 channel unsigned 8-bit normalized integer

CU_AD_FORMAT_UNORM_INT16X1 = 195

1 channel unsigned 16-bit normalized integer

CU_AD_FORMAT_UNORM_INT16X2 = 196

2 channel unsigned 16-bit normalized integer

CU_AD_FORMAT_UNORM_INT16X4 = 197

4 channel unsigned 16-bit normalized integer

CU_AD_FORMAT_SNORM_INT8X1 = 198

1 channel signed 8-bit normalized integer

CU_AD_FORMAT_SNORM_INT8X2 = 199

2 channel signed 8-bit normalized integer

CU_AD_FORMAT_SNORM_INT8X4 = 200

4 channel signed 8-bit normalized integer

CU_AD_FORMAT_SNORM_INT16X1 = 201

1 channel signed 16-bit normalized integer

CU_AD_FORMAT_SNORM_INT16X2 = 202

2 channel signed 16-bit normalized integer

CU_AD_FORMAT_SNORM_INT16X4 = 203

4 channel signed 16-bit normalized integer

CU_AD_FORMAT_BC1_UNORM = 145

4 channel unsigned normalized block-compressed (BC1 compression) format

CU_AD_FORMAT_BC1_UNORM_SRGB = 146

4 channel unsigned normalized block-compressed (BC1 compression) format with sRGB encoding

CU_AD_FORMAT_BC2_UNORM = 147

4 channel unsigned normalized block-compressed (BC2 compression) format

CU_AD_FORMAT_BC2_UNORM_SRGB = 148

4 channel unsigned normalized block-compressed (BC2 compression) format with sRGB encoding

CU_AD_FORMAT_BC3_UNORM = 149

4 channel unsigned normalized block-compressed (BC3 compression) format

CU_AD_FORMAT_BC3_UNORM_SRGB = 150

4 channel unsigned normalized block-compressed (BC3 compression) format with sRGB encoding

CU_AD_FORMAT_BC4_UNORM = 151

1 channel unsigned normalized block-compressed (BC4 compression) format

CU_AD_FORMAT_BC4_SNORM = 152

1 channel signed normalized block-compressed (BC4 compression) format

CU_AD_FORMAT_BC5_UNORM = 153

2 channel unsigned normalized block-compressed (BC5 compression) format

CU_AD_FORMAT_BC5_SNORM = 154

2 channel signed normalized block-compressed (BC5 compression) format

CU_AD_FORMAT_BC6H_UF16 = 155

3 channel unsigned half-float block-compressed (BC6H compression) format

CU_AD_FORMAT_BC6H_SF16 = 156

3 channel signed half-float block-compressed (BC6H compression) format

CU_AD_FORMAT_BC7_UNORM = 157

4 channel unsigned normalized block-compressed (BC7 compression) format

CU_AD_FORMAT_BC7_UNORM_SRGB = 158

4 channel unsigned normalized block-compressed (BC7 compression) format with sRGB encoding

CU_AD_FORMAT_P010 = 159

10-bit YUV planar format, with 4:2:0 sampling

CU_AD_FORMAT_P016 = 161

16-bit YUV planar format, with 4:2:0 sampling

CU_AD_FORMAT_NV16 = 162

8-bit YUV planar format, with 4:2:2 sampling

CU_AD_FORMAT_P210 = 163

10-bit YUV planar format, with 4:2:2 sampling

CU_AD_FORMAT_P216 = 164

16-bit YUV planar format, with 4:2:2 sampling

CU_AD_FORMAT_YUY2 = 165

2 channel, 8-bit YUV packed planar format, with 4:2:2 sampling

CU_AD_FORMAT_Y210 = 166

2 channel, 10-bit YUV packed planar format, with 4:2:2 sampling

CU_AD_FORMAT_Y216 = 167

2 channel, 16-bit YUV packed planar format, with 4:2:2 sampling

CU_AD_FORMAT_AYUV = 168

4 channel, 8-bit YUV packed planar format, with 4:4:4 sampling

CU_AD_FORMAT_Y410 = 169

10-bit YUV packed planar format, with 4:4:4 sampling

CU_AD_FORMAT_Y416 = 177

4 channel, 12-bit YUV packed planar format, with 4:4:4 sampling

CU_AD_FORMAT_Y444_PLANAR8 = 178

3 channel 8-bit YUV planar format, with 4:4:4 sampling

CU_AD_FORMAT_Y444_PLANAR10 = 179

3 channel 10-bit YUV planar format, with 4:4:4 sampling

CU_AD_FORMAT_MAX = 2147483647
class cuda.bindings.driver.CUaddress_mode(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Texture reference addressing modes

CU_TR_ADDRESS_MODE_WRAP = 0

Wrapping address mode

CU_TR_ADDRESS_MODE_CLAMP = 1

Clamp to edge address mode

CU_TR_ADDRESS_MODE_MIRROR = 2

Mirror address mode

CU_TR_ADDRESS_MODE_BORDER = 3

Border address mode

class cuda.bindings.driver.CUfilter_mode(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Texture reference filtering modes

CU_TR_FILTER_MODE_POINT = 0

Point filter mode

CU_TR_FILTER_MODE_LINEAR = 1

Linear filter mode

class cuda.bindings.driver.CUdevice_attribute(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Device properties

CU_DEVICE_ATTRIBUTE_MAX_THREADS_PER_BLOCK = 1

Maximum number of threads per block

CU_DEVICE_ATTRIBUTE_MAX_BLOCK_DIM_X = 2

Maximum block dimension X

CU_DEVICE_ATTRIBUTE_MAX_BLOCK_DIM_Y = 3

Maximum block dimension Y

CU_DEVICE_ATTRIBUTE_MAX_BLOCK_DIM_Z = 4

Maximum block dimension Z

CU_DEVICE_ATTRIBUTE_MAX_GRID_DIM_X = 5

Maximum grid dimension X

CU_DEVICE_ATTRIBUTE_MAX_GRID_DIM_Y = 6

Maximum grid dimension Y

CU_DEVICE_ATTRIBUTE_MAX_GRID_DIM_Z = 7

Maximum grid dimension Z

CU_DEVICE_ATTRIBUTE_MAX_SHARED_MEMORY_PER_BLOCK = 8

Maximum shared memory available per block in bytes

CU_DEVICE_ATTRIBUTE_SHARED_MEMORY_PER_BLOCK = 8

Deprecated, use CU_DEVICE_ATTRIBUTE_MAX_SHARED_MEMORY_PER_BLOCK

CU_DEVICE_ATTRIBUTE_TOTAL_CONSTANT_MEMORY = 9

Memory available on device for constant variables in a CUDA C kernel in bytes

CU_DEVICE_ATTRIBUTE_WARP_SIZE = 10

Warp size in threads

CU_DEVICE_ATTRIBUTE_MAX_PITCH = 11

Maximum pitch in bytes allowed by memory copies

CU_DEVICE_ATTRIBUTE_MAX_REGISTERS_PER_BLOCK = 12

Maximum number of 32-bit registers available per block

CU_DEVICE_ATTRIBUTE_REGISTERS_PER_BLOCK = 12

Deprecated, use CU_DEVICE_ATTRIBUTE_MAX_REGISTERS_PER_BLOCK

CU_DEVICE_ATTRIBUTE_CLOCK_RATE = 13

Typical clock frequency in kilohertz

CU_DEVICE_ATTRIBUTE_TEXTURE_ALIGNMENT = 14

Alignment requirement for textures

CU_DEVICE_ATTRIBUTE_GPU_OVERLAP = 15

Device can possibly copy memory and execute a kernel concurrently. Deprecated. Use instead CU_DEVICE_ATTRIBUTE_ASYNC_ENGINE_COUNT.

CU_DEVICE_ATTRIBUTE_MULTIPROCESSOR_COUNT = 16

Number of multiprocessors on device

CU_DEVICE_ATTRIBUTE_KERNEL_EXEC_TIMEOUT = 17

Specifies whether there is a run time limit on kernels

CU_DEVICE_ATTRIBUTE_INTEGRATED = 18

Device is integrated with host memory

CU_DEVICE_ATTRIBUTE_CAN_MAP_HOST_MEMORY = 19

Device can map host memory into CUDA address space

CU_DEVICE_ATTRIBUTE_COMPUTE_MODE = 20

Compute mode (See CUcomputemode for details)

CU_DEVICE_ATTRIBUTE_MAXIMUM_TEXTURE1D_WIDTH = 21

Maximum 1D texture width

CU_DEVICE_ATTRIBUTE_MAXIMUM_TEXTURE2D_WIDTH = 22

Maximum 2D texture width

CU_DEVICE_ATTRIBUTE_MAXIMUM_TEXTURE2D_HEIGHT = 23

Maximum 2D texture height

CU_DEVICE_ATTRIBUTE_MAXIMUM_TEXTURE3D_WIDTH = 24

Maximum 3D texture width

CU_DEVICE_ATTRIBUTE_MAXIMUM_TEXTURE3D_HEIGHT = 25

Maximum 3D texture height

CU_DEVICE_ATTRIBUTE_MAXIMUM_TEXTURE3D_DEPTH = 26

Maximum 3D texture depth

CU_DEVICE_ATTRIBUTE_MAXIMUM_TEXTURE2D_LAYERED_WIDTH = 27

Maximum 2D layered texture width

CU_DEVICE_ATTRIBUTE_MAXIMUM_TEXTURE2D_LAYERED_HEIGHT = 28

Maximum 2D layered texture height

CU_DEVICE_ATTRIBUTE_MAXIMUM_TEXTURE2D_LAYERED_LAYERS = 29

Maximum layers in a 2D layered texture

CU_DEVICE_ATTRIBUTE_MAXIMUM_TEXTURE2D_ARRAY_WIDTH = 27

Deprecated, use CU_DEVICE_ATTRIBUTE_MAXIMUM_TEXTURE2D_LAYERED_WIDTH

CU_DEVICE_ATTRIBUTE_MAXIMUM_TEXTURE2D_ARRAY_HEIGHT = 28

Deprecated, use CU_DEVICE_ATTRIBUTE_MAXIMUM_TEXTURE2D_LAYERED_HEIGHT

CU_DEVICE_ATTRIBUTE_MAXIMUM_TEXTURE2D_ARRAY_NUMSLICES = 29

Deprecated, use CU_DEVICE_ATTRIBUTE_MAXIMUM_TEXTURE2D_LAYERED_LAYERS

CU_DEVICE_ATTRIBUTE_SURFACE_ALIGNMENT = 30

Alignment requirement for surfaces

CU_DEVICE_ATTRIBUTE_CONCURRENT_KERNELS = 31

Device can possibly execute multiple kernels concurrently

CU_DEVICE_ATTRIBUTE_ECC_ENABLED = 32

Device has ECC support enabled

CU_DEVICE_ATTRIBUTE_PCI_BUS_ID = 33

PCI bus ID of the device

CU_DEVICE_ATTRIBUTE_PCI_DEVICE_ID = 34

PCI device ID of the device

CU_DEVICE_ATTRIBUTE_TCC_DRIVER = 35

Device is using TCC driver model

CU_DEVICE_ATTRIBUTE_MEMORY_CLOCK_RATE = 36

Peak memory clock frequency in kilohertz

CU_DEVICE_ATTRIBUTE_GLOBAL_MEMORY_BUS_WIDTH = 37

Global memory bus width in bits

CU_DEVICE_ATTRIBUTE_L2_CACHE_SIZE = 38

Size of L2 cache in bytes

CU_DEVICE_ATTRIBUTE_MAX_THREADS_PER_MULTIPROCESSOR = 39

Maximum resident threads per multiprocessor

CU_DEVICE_ATTRIBUTE_ASYNC_ENGINE_COUNT = 40

Number of asynchronous engines

CU_DEVICE_ATTRIBUTE_UNIFIED_ADDRESSING = 41

Device shares a unified address space with the host

CU_DEVICE_ATTRIBUTE_MAXIMUM_TEXTURE1D_LAYERED_WIDTH = 42

Maximum 1D layered texture width

CU_DEVICE_ATTRIBUTE_MAXIMUM_TEXTURE1D_LAYERED_LAYERS = 43

Maximum layers in a 1D layered texture

CU_DEVICE_ATTRIBUTE_CAN_TEX2D_GATHER = 44

Deprecated, do not use.

CU_DEVICE_ATTRIBUTE_MAXIMUM_TEXTURE2D_GATHER_WIDTH = 45

Maximum 2D texture width if CUDA_ARRAY3D_TEXTURE_GATHER is set

CU_DEVICE_ATTRIBUTE_MAXIMUM_TEXTURE2D_GATHER_HEIGHT = 46

Maximum 2D texture height if CUDA_ARRAY3D_TEXTURE_GATHER is set

CU_DEVICE_ATTRIBUTE_MAXIMUM_TEXTURE3D_WIDTH_ALTERNATE = 47

Alternate maximum 3D texture width

CU_DEVICE_ATTRIBUTE_MAXIMUM_TEXTURE3D_HEIGHT_ALTERNATE = 48

Alternate maximum 3D texture height

CU_DEVICE_ATTRIBUTE_MAXIMUM_TEXTURE3D_DEPTH_ALTERNATE = 49

Alternate maximum 3D texture depth

CU_DEVICE_ATTRIBUTE_PCI_DOMAIN_ID = 50

PCI domain ID of the device

CU_DEVICE_ATTRIBUTE_TEXTURE_PITCH_ALIGNMENT = 51

Pitch alignment requirement for textures

CU_DEVICE_ATTRIBUTE_MAXIMUM_TEXTURECUBEMAP_WIDTH = 52

Maximum cubemap texture width/height

CU_DEVICE_ATTRIBUTE_MAXIMUM_TEXTURECUBEMAP_LAYERED_WIDTH = 53

Maximum cubemap layered texture width/height

CU_DEVICE_ATTRIBUTE_MAXIMUM_TEXTURECUBEMAP_LAYERED_LAYERS = 54

Maximum layers in a cubemap layered texture

CU_DEVICE_ATTRIBUTE_MAXIMUM_SURFACE1D_WIDTH = 55

Maximum 1D surface width

CU_DEVICE_ATTRIBUTE_MAXIMUM_SURFACE2D_WIDTH = 56

Maximum 2D surface width

CU_DEVICE_ATTRIBUTE_MAXIMUM_SURFACE2D_HEIGHT = 57

Maximum 2D surface height

CU_DEVICE_ATTRIBUTE_MAXIMUM_SURFACE3D_WIDTH = 58

Maximum 3D surface width

CU_DEVICE_ATTRIBUTE_MAXIMUM_SURFACE3D_HEIGHT = 59

Maximum 3D surface height

CU_DEVICE_ATTRIBUTE_MAXIMUM_SURFACE3D_DEPTH = 60

Maximum 3D surface depth

CU_DEVICE_ATTRIBUTE_MAXIMUM_SURFACE1D_LAYERED_WIDTH = 61

Maximum 1D layered surface width

CU_DEVICE_ATTRIBUTE_MAXIMUM_SURFACE1D_LAYERED_LAYERS = 62

Maximum layers in a 1D layered surface

CU_DEVICE_ATTRIBUTE_MAXIMUM_SURFACE2D_LAYERED_WIDTH = 63

Maximum 2D layered surface width

CU_DEVICE_ATTRIBUTE_MAXIMUM_SURFACE2D_LAYERED_HEIGHT = 64

Maximum 2D layered surface height

CU_DEVICE_ATTRIBUTE_MAXIMUM_SURFACE2D_LAYERED_LAYERS = 65

Maximum layers in a 2D layered surface

CU_DEVICE_ATTRIBUTE_MAXIMUM_SURFACECUBEMAP_WIDTH = 66

Maximum cubemap surface width

CU_DEVICE_ATTRIBUTE_MAXIMUM_SURFACECUBEMAP_LAYERED_WIDTH = 67

Maximum cubemap layered surface width

CU_DEVICE_ATTRIBUTE_MAXIMUM_SURFACECUBEMAP_LAYERED_LAYERS = 68

Maximum layers in a cubemap layered surface

CU_DEVICE_ATTRIBUTE_MAXIMUM_TEXTURE1D_LINEAR_WIDTH = 69

Deprecated, do not use. Use cudaDeviceGetTexture1DLinearMaxWidth() or cuDeviceGetTexture1DLinearMaxWidth() instead.

CU_DEVICE_ATTRIBUTE_MAXIMUM_TEXTURE2D_LINEAR_WIDTH = 70

Maximum 2D linear texture width

CU_DEVICE_ATTRIBUTE_MAXIMUM_TEXTURE2D_LINEAR_HEIGHT = 71

Maximum 2D linear texture height

CU_DEVICE_ATTRIBUTE_MAXIMUM_TEXTURE2D_LINEAR_PITCH = 72

Maximum 2D linear texture pitch in bytes

CU_DEVICE_ATTRIBUTE_MAXIMUM_TEXTURE2D_MIPMAPPED_WIDTH = 73

Maximum mipmapped 2D texture width

CU_DEVICE_ATTRIBUTE_MAXIMUM_TEXTURE2D_MIPMAPPED_HEIGHT = 74

Maximum mipmapped 2D texture height

CU_DEVICE_ATTRIBUTE_COMPUTE_CAPABILITY_MAJOR = 75

Major compute capability version number

CU_DEVICE_ATTRIBUTE_COMPUTE_CAPABILITY_MINOR = 76

Minor compute capability version number

CU_DEVICE_ATTRIBUTE_MAXIMUM_TEXTURE1D_MIPMAPPED_WIDTH = 77

Maximum mipmapped 1D texture width

CU_DEVICE_ATTRIBUTE_STREAM_PRIORITIES_SUPPORTED = 78

Device supports stream priorities

CU_DEVICE_ATTRIBUTE_GLOBAL_L1_CACHE_SUPPORTED = 79

Device supports caching globals in L1

CU_DEVICE_ATTRIBUTE_LOCAL_L1_CACHE_SUPPORTED = 80

Device supports caching locals in L1

CU_DEVICE_ATTRIBUTE_MAX_SHARED_MEMORY_PER_MULTIPROCESSOR = 81

Maximum shared memory available per multiprocessor in bytes

CU_DEVICE_ATTRIBUTE_MAX_REGISTERS_PER_MULTIPROCESSOR = 82

Maximum number of 32-bit registers available per multiprocessor

CU_DEVICE_ATTRIBUTE_MANAGED_MEMORY = 83

Device can allocate managed memory on this system

CU_DEVICE_ATTRIBUTE_MULTI_GPU_BOARD = 84

Device is on a multi-GPU board

CU_DEVICE_ATTRIBUTE_MULTI_GPU_BOARD_GROUP_ID = 85

Unique id for a group of devices on the same multi-GPU board

CU_DEVICE_ATTRIBUTE_HOST_NATIVE_ATOMIC_SUPPORTED = 86

Link between the device and the host supports native atomic operations (this is a placeholder attribute, and is not supported on any current hardware)

CU_DEVICE_ATTRIBUTE_SINGLE_TO_DOUBLE_PRECISION_PERF_RATIO = 87

Ratio of single precision performance (in floating-point operations per second) to double precision performance

CU_DEVICE_ATTRIBUTE_PAGEABLE_MEMORY_ACCESS = 88

Device supports coherently accessing pageable memory without calling cudaHostRegister on it

CU_DEVICE_ATTRIBUTE_CONCURRENT_MANAGED_ACCESS = 89

Device can coherently access managed memory concurrently with the CPU

CU_DEVICE_ATTRIBUTE_COMPUTE_PREEMPTION_SUPPORTED = 90

Device supports compute preemption.

CU_DEVICE_ATTRIBUTE_CAN_USE_HOST_POINTER_FOR_REGISTERED_MEM = 91

Device can access host registered memory at the same virtual address as the CPU

CU_DEVICE_ATTRIBUTE_CAN_USE_STREAM_MEM_OPS_V1 = 92

Deprecated along with the v1 MemOps API. cuStreamBatchMemOp and related APIs are supported.

CU_DEVICE_ATTRIBUTE_CAN_USE_64_BIT_STREAM_MEM_OPS_V1 = 93

Deprecated along with the v1 MemOps API. 64-bit operations are supported in cuStreamBatchMemOp and related APIs.

CU_DEVICE_ATTRIBUTE_CAN_USE_STREAM_WAIT_VALUE_NOR_V1 = 94

Deprecated along with the v1 MemOps API. CU_STREAM_WAIT_VALUE_NOR is supported.

CU_DEVICE_ATTRIBUTE_COOPERATIVE_LAUNCH = 95

Device supports launching cooperative kernels via cuLaunchCooperativeKernel

CU_DEVICE_ATTRIBUTE_COOPERATIVE_MULTI_DEVICE_LAUNCH = 96

Deprecated, cuLaunchCooperativeKernelMultiDevice is deprecated.

CU_DEVICE_ATTRIBUTE_MAX_SHARED_MEMORY_PER_BLOCK_OPTIN = 97

Maximum optin shared memory per block

CU_DEVICE_ATTRIBUTE_CAN_FLUSH_REMOTE_WRITES = 98

The CU_STREAM_WAIT_VALUE_FLUSH flag and the CU_STREAM_MEM_OP_FLUSH_REMOTE_WRITES MemOp are supported on the device. See Stream Memory Operations for additional details.

CU_DEVICE_ATTRIBUTE_HOST_REGISTER_SUPPORTED = 99

Device supports host memory registration via cudaHostRegister.

CU_DEVICE_ATTRIBUTE_PAGEABLE_MEMORY_ACCESS_USES_HOST_PAGE_TABLES = 100

Device accesses pageable memory via the host’s page tables.

CU_DEVICE_ATTRIBUTE_DIRECT_MANAGED_MEM_ACCESS_FROM_HOST = 101

The host can directly access managed memory on the device without migration.

CU_DEVICE_ATTRIBUTE_VIRTUAL_ADDRESS_MANAGEMENT_SUPPORTED = 102

Deprecated, use CU_DEVICE_ATTRIBUTE_VIRTUAL_MEMORY_MANAGEMENT_SUPPORTED

CU_DEVICE_ATTRIBUTE_VIRTUAL_MEMORY_MANAGEMENT_SUPPORTED = 102

Device supports virtual memory management APIs like cuMemAddressReserve, cuMemCreate, cuMemMap and related APIs

CU_DEVICE_ATTRIBUTE_HANDLE_TYPE_POSIX_FILE_DESCRIPTOR_SUPPORTED = 103

Device supports exporting memory to a posix file descriptor with cuMemExportToShareableHandle, if requested via cuMemCreate

CU_DEVICE_ATTRIBUTE_HANDLE_TYPE_WIN32_HANDLE_SUPPORTED = 104

Device supports exporting memory to a Win32 NT handle with cuMemExportToShareableHandle, if requested via cuMemCreate

CU_DEVICE_ATTRIBUTE_HANDLE_TYPE_WIN32_KMT_HANDLE_SUPPORTED = 105

Device supports exporting memory to a Win32 KMT handle with cuMemExportToShareableHandle, if requested via cuMemCreate

CU_DEVICE_ATTRIBUTE_MAX_BLOCKS_PER_MULTIPROCESSOR = 106

Maximum number of blocks per multiprocessor

CU_DEVICE_ATTRIBUTE_GENERIC_COMPRESSION_SUPPORTED = 107

Device supports compression of memory

CU_DEVICE_ATTRIBUTE_MAX_PERSISTING_L2_CACHE_SIZE = 108

Maximum L2 persisting lines capacity setting in bytes.

CU_DEVICE_ATTRIBUTE_MAX_ACCESS_POLICY_WINDOW_SIZE = 109

Maximum value of num_bytes in a CUaccessPolicyWindow.

CU_DEVICE_ATTRIBUTE_GPU_DIRECT_RDMA_WITH_CUDA_VMM_SUPPORTED = 110

Device supports specifying the GPUDirect RDMA flag with cuMemCreate

CU_DEVICE_ATTRIBUTE_RESERVED_SHARED_MEMORY_PER_BLOCK = 111

Shared memory reserved by CUDA driver per block in bytes

CU_DEVICE_ATTRIBUTE_SPARSE_CUDA_ARRAY_SUPPORTED = 112

Device supports sparse CUDA arrays and sparse CUDA mipmapped arrays

CU_DEVICE_ATTRIBUTE_READ_ONLY_HOST_REGISTER_SUPPORTED = 113

Device supports using the cuMemHostRegister flag CU_MEMHOSTREGISTER_READ_ONLY to register memory that must be mapped as read-only to the GPU

CU_DEVICE_ATTRIBUTE_TIMELINE_SEMAPHORE_INTEROP_SUPPORTED = 114

External timeline semaphore interop is supported on the device

CU_DEVICE_ATTRIBUTE_MEMORY_POOLS_SUPPORTED = 115

Device supports using the cuMemAllocAsync and cuMemPool family of APIs

CU_DEVICE_ATTRIBUTE_GPU_DIRECT_RDMA_SUPPORTED = 116

Device supports GPUDirect RDMA APIs, like nvidia_p2p_get_pages (see https://docs.nvidia.com/cuda/gpudirect-rdma for more information)

CU_DEVICE_ATTRIBUTE_GPU_DIRECT_RDMA_FLUSH_WRITES_OPTIONS = 117

The returned attribute shall be interpreted as a bitmask, where the individual bits are described by the CUflushGPUDirectRDMAWritesOptions enum

CU_DEVICE_ATTRIBUTE_GPU_DIRECT_RDMA_WRITES_ORDERING = 118

GPUDirect RDMA writes to the device do not need to be flushed for consumers within the scope indicated by the returned attribute. See CUGPUDirectRDMAWritesOrdering for the numerical values returned here.

CU_DEVICE_ATTRIBUTE_MEMPOOL_SUPPORTED_HANDLE_TYPES = 119

Handle types supported with mempool based IPC

CU_DEVICE_ATTRIBUTE_CLUSTER_LAUNCH = 120

Indicates device supports cluster launch

CU_DEVICE_ATTRIBUTE_DEFERRED_MAPPING_CUDA_ARRAY_SUPPORTED = 121

Device supports deferred mapping CUDA arrays and CUDA mipmapped arrays

CU_DEVICE_ATTRIBUTE_CAN_USE_64_BIT_STREAM_MEM_OPS = 122

64-bit operations are supported in cuStreamBatchMemOp and related MemOp APIs.

CU_DEVICE_ATTRIBUTE_CAN_USE_STREAM_WAIT_VALUE_NOR = 123

CU_STREAM_WAIT_VALUE_NOR is supported by MemOp APIs.

CU_DEVICE_ATTRIBUTE_DMA_BUF_SUPPORTED = 124

Device supports buffer sharing with dma_buf mechanism.

CU_DEVICE_ATTRIBUTE_IPC_EVENT_SUPPORTED = 125

Device supports IPC Events.

CU_DEVICE_ATTRIBUTE_MEM_SYNC_DOMAIN_COUNT = 126

Number of memory domains the device supports.

CU_DEVICE_ATTRIBUTE_TENSOR_MAP_ACCESS_SUPPORTED = 127

Device supports accessing memory using Tensor Map.

CU_DEVICE_ATTRIBUTE_HANDLE_TYPE_FABRIC_SUPPORTED = 128

Device supports exporting memory to a fabric handle with cuMemExportToShareableHandle(), if requested via cuMemCreate()

CU_DEVICE_ATTRIBUTE_UNIFIED_FUNCTION_POINTERS = 129

Device supports unified function pointers.

CU_DEVICE_ATTRIBUTE_NUMA_CONFIG = 130

NUMA configuration of a device: value is of type CUdeviceNumaConfig enum

CU_DEVICE_ATTRIBUTE_NUMA_ID = 131

NUMA node ID of the GPU memory

CU_DEVICE_ATTRIBUTE_MULTICAST_SUPPORTED = 132

Device supports switch multicast and reduction operations.

CU_DEVICE_ATTRIBUTE_MPS_ENABLED = 133

Indicates if contexts created on this device will be shared via MPS

CU_DEVICE_ATTRIBUTE_HOST_NUMA_ID = 134

NUMA ID of the host node closest to the device. Returns -1 when system does not support NUMA.

CU_DEVICE_ATTRIBUTE_D3D12_CIG_SUPPORTED = 135

Device supports CIG with D3D12.

CU_DEVICE_ATTRIBUTE_MAX = 136
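
The attribute values above are read with cuDeviceGetAttribute(). A minimal sketch using cuda.bindings.driver; the device ordinal 0 and the two attributes queried are arbitrary examples:

    from cuda.bindings import driver

    def check(err):
        # Every driver call returns a tuple whose first element is a CUresult.
        if err != driver.CUresult.CUDA_SUCCESS:
            raise RuntimeError(f"CUDA driver error: {err}")

    err, = driver.cuInit(0)
    check(err)
    err, dev = driver.cuDeviceGet(0)  # device ordinal 0, chosen arbitrarily
    check(err)

    attr = driver.CUdevice_attribute.CU_DEVICE_ATTRIBUTE_MAX_THREADS_PER_BLOCK
    err, max_threads = driver.cuDeviceGetAttribute(attr, dev)
    check(err)

    attr = driver.CUdevice_attribute.CU_DEVICE_ATTRIBUTE_COMPUTE_CAPABILITY_MAJOR
    err, cc_major = driver.cuDeviceGetAttribute(attr, dev)
    check(err)

    print(f"max threads/block: {max_threads}, compute capability major: {cc_major}")
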
class cuda.bindings.driver.CUpointer_attribute(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Pointer information

CU_POINTER_ATTRIBUTE_CONTEXT = 1

The CUcontext on which a pointer was allocated or registered

CU_POINTER_ATTRIBUTE_MEMORY_TYPE = 2

The CUmemorytype describing the physical location of a pointer

CU_POINTER_ATTRIBUTE_DEVICE_POINTER = 3

The address at which a pointer’s memory may be accessed on the device

CU_POINTER_ATTRIBUTE_HOST_POINTER = 4

The address at which a pointer’s memory may be accessed on the host

CU_POINTER_ATTRIBUTE_P2P_TOKENS = 5

A pair of tokens for use with the nv-p2p.h Linux kernel interface

CU_POINTER_ATTRIBUTE_SYNC_MEMOPS = 6

Synchronize every synchronous memory operation initiated on this region

CU_POINTER_ATTRIBUTE_BUFFER_ID = 7

A process-wide unique ID for an allocated memory region

CU_POINTER_ATTRIBUTE_IS_MANAGED = 8

Indicates if the pointer points to managed memory

CU_POINTER_ATTRIBUTE_DEVICE_ORDINAL = 9

A device ordinal of a device on which a pointer was allocated or registered

CU_POINTER_ATTRIBUTE_IS_LEGACY_CUDA_IPC_CAPABLE = 10

1 if this pointer maps to an allocation that is suitable for cudaIpcGetMemHandle, 0 otherwise

CU_POINTER_ATTRIBUTE_RANGE_START_ADDR = 11

Starting address for this requested pointer

CU_POINTER_ATTRIBUTE_RANGE_SIZE = 12

Size of the address range for this requested pointer

CU_POINTER_ATTRIBUTE_MAPPED = 13

1 if this pointer is in a valid address range that is mapped to a backing allocation, 0 otherwise

CU_POINTER_ATTRIBUTE_ALLOWED_HANDLE_TYPES = 14

Bitmask of allowed CUmemAllocationHandleType for this allocation

CU_POINTER_ATTRIBUTE_IS_GPU_DIRECT_RDMA_CAPABLE = 15

1 if the memory this pointer is referencing can be used with the GPUDirect RDMA API

CU_POINTER_ATTRIBUTE_ACCESS_FLAGS = 16

Returns the access flags the device associated with the current context has on the corresponding memory referenced by the pointer given

CU_POINTER_ATTRIBUTE_MEMPOOL_HANDLE = 17

Returns the mempool handle for the allocation if it was allocated from a mempool. Otherwise returns NULL.

CU_POINTER_ATTRIBUTE_MAPPING_SIZE = 18

Size of the actual underlying mapping that the pointer belongs to

CU_POINTER_ATTRIBUTE_MAPPING_BASE_ADDR = 19

The start address of the mapping that the pointer belongs to

CU_POINTER_ATTRIBUTE_MEMORY_BLOCK_ID = 20

A process-wide unique id corresponding to the physical allocation the pointer belongs to
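
Individual pointer attributes are queried with cuPointerGetAttribute(). A short sketch, assuming the binding returns the attribute value directly as the second tuple element (error checks omitted for brevity):

    from cuda.bindings import driver

    err, = driver.cuInit(0)
    err, dev = driver.cuDeviceGet(0)
    err, ctx = driver.cuCtxCreate(0, dev)      # a current context is required for allocations
    err, dptr = driver.cuMemAlloc(1 << 20)     # 1 MiB device allocation

    attr = driver.CUpointer_attribute.CU_POINTER_ATTRIBUTE_MEMORY_TYPE
    err, mem_type = driver.cuPointerGetAttribute(attr, dptr)    # CU_MEMORYTYPE_DEVICE for cuMemAlloc memory

    attr = driver.CUpointer_attribute.CU_POINTER_ATTRIBUTE_RANGE_SIZE
    err, range_size = driver.cuPointerGetAttribute(attr, dptr)  # size of the backing allocation in bytes

    err, = driver.cuMemFree(dptr)
    err, = driver.cuCtxDestroy(ctx)
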

class cuda.bindings.driver.CUfunction_attribute(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Function properties

CU_FUNC_ATTRIBUTE_MAX_THREADS_PER_BLOCK = 0

The maximum number of threads per block, beyond which a launch of the function would fail. This number depends on both the function and the device on which the function is currently loaded.

CU_FUNC_ATTRIBUTE_SHARED_SIZE_BYTES = 1

The size in bytes of statically-allocated shared memory required by this function. This does not include dynamically-allocated shared memory requested by the user at runtime.

CU_FUNC_ATTRIBUTE_CONST_SIZE_BYTES = 2

The size in bytes of user-allocated constant memory required by this function.

CU_FUNC_ATTRIBUTE_LOCAL_SIZE_BYTES = 3

The size in bytes of local memory used by each thread of this function.

CU_FUNC_ATTRIBUTE_NUM_REGS = 4

The number of registers used by each thread of this function.

CU_FUNC_ATTRIBUTE_PTX_VERSION = 5

The PTX virtual architecture version for which the function was compiled. This value is the major PTX version * 10 + the minor PTX version, so a PTX version 1.3 function would return the value 13. Note that this may return the undefined value of 0 for cubins compiled prior to CUDA 3.0.

CU_FUNC_ATTRIBUTE_BINARY_VERSION = 6

The binary architecture version for which the function was compiled. This value is the major binary version * 10 + the minor binary version, so a binary version 1.3 function would return the value 13. Note that this will return a value of 10 for legacy cubins that do not have a properly-encoded binary architecture version.

CU_FUNC_ATTRIBUTE_CACHE_MODE_CA = 7

The attribute to indicate whether the function has been compiled with the user-specified option "-Xptxas --dlcm=ca" set.

CU_FUNC_ATTRIBUTE_MAX_DYNAMIC_SHARED_SIZE_BYTES = 8

The maximum size in bytes of dynamically-allocated shared memory that can be used by this function. If the user-specified dynamic shared memory size is larger than this value, the launch will fail. See cuFuncSetAttribute, cuKernelSetAttribute

CU_FUNC_ATTRIBUTE_PREFERRED_SHARED_MEMORY_CARVEOUT = 9

On devices where the L1 cache and shared memory use the same hardware resources, this sets the shared memory carveout preference, in percent of the total shared memory. Refer to CU_DEVICE_ATTRIBUTE_MAX_SHARED_MEMORY_PER_MULTIPROCESSOR. This is only a hint, and the driver can choose a different ratio if required to execute the function. See cuFuncSetAttribute, cuKernelSetAttribute

CU_FUNC_ATTRIBUTE_CLUSTER_SIZE_MUST_BE_SET = 10

If this attribute is set, the kernel must launch with a valid cluster size specified. See cuFuncSetAttribute, cuKernelSetAttribute

CU_FUNC_ATTRIBUTE_REQUIRED_CLUSTER_WIDTH = 11

The required cluster width in blocks. The values must either all be 0 or all be positive. The validity of the cluster dimensions is otherwise checked at launch time.

If the value is set during compile time, it cannot be set at runtime. Setting it at runtime will return CUDA_ERROR_NOT_PERMITTED. See cuFuncSetAttribute, cuKernelSetAttribute

CU_FUNC_ATTRIBUTE_REQUIRED_CLUSTER_HEIGHT = 12

The required cluster height in blocks. The values must either all be 0 or all be positive. The validity of the cluster dimensions is otherwise checked at launch time.

If the value is set during compile time, it cannot be set at runtime. Setting it at runtime should return CUDA_ERROR_NOT_PERMITTED. See cuFuncSetAttribute, cuKernelSetAttribute

CU_FUNC_ATTRIBUTE_REQUIRED_CLUSTER_DEPTH = 13

The required cluster depth in blocks. The values must either all be 0 or all be positive. The validity of the cluster dimensions is otherwise checked at launch time.

If the value is set during compile time, it cannot be set at runtime. Setting it at runtime should return CUDA_ERROR_NOT_PERMITTED. See cuFuncSetAttribute, cuKernelSetAttribute

CU_FUNC_ATTRIBUTE_NON_PORTABLE_CLUSTER_SIZE_ALLOWED = 14

Whether the function can be launched with non-portable cluster size. 1 is allowed, 0 is disallowed. A non-portable cluster size may only function on the specific SKUs the program is tested on. The launch might fail if the program is run on a different hardware platform.

CUDA API provides cudaOccupancyMaxActiveClusters to assist with checking whether the desired size can be launched on the current device.

Portable Cluster Size

A portable cluster size is guaranteed to be functional on all compute capabilities higher than the target compute capability. The portable cluster size for sm_90 is 8 blocks per cluster. This value may increase for future compute capabilities.

The specific hardware unit may support higher cluster sizes that are not guaranteed to be portable. See cuFuncSetAttribute, cuKernelSetAttribute

CU_FUNC_ATTRIBUTE_CLUSTER_SCHEDULING_POLICY_PREFERENCE = 15

The block scheduling policy of a function. The value type is CUclusterSchedulingPolicy / cudaClusterSchedulingPolicy. See cuFuncSetAttribute, cuKernelSetAttribute

CU_FUNC_ATTRIBUTE_MAX = 16
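
These attributes are read with cuFuncGetAttribute(), and a subset (for example CU_FUNC_ATTRIBUTE_MAX_DYNAMIC_SHARED_SIZE_BYTES) can be written with cuFuncSetAttribute(). A sketch, assuming `module` was loaded earlier (e.g. via cuModuleLoadData) and contains a kernel named "my_kernel"; both names are placeholders:

    from cuda.bindings import driver

    # `module` and the kernel name are assumptions for illustration only.
    err, kernel = driver.cuModuleGetFunction(module, b"my_kernel")

    attr = driver.CUfunction_attribute.CU_FUNC_ATTRIBUTE_NUM_REGS
    err, num_regs = driver.cuFuncGetAttribute(attr, kernel)

    attr = driver.CUfunction_attribute.CU_FUNC_ATTRIBUTE_SHARED_SIZE_BYTES
    err, static_smem = driver.cuFuncGetAttribute(attr, kernel)

    # Opt in to a larger dynamic shared memory limit (96 KiB here); the value must not
    # exceed CU_DEVICE_ATTRIBUTE_MAX_SHARED_MEMORY_PER_BLOCK_OPTIN for the device.
    err, = driver.cuFuncSetAttribute(
        kernel,
        driver.CUfunction_attribute.CU_FUNC_ATTRIBUTE_MAX_DYNAMIC_SHARED_SIZE_BYTES,
        96 * 1024,
    )
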
class cuda.bindings.driver.CUfunc_cache(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Function cache configurations

CU_FUNC_CACHE_PREFER_NONE = 0

no preference for shared memory or L1 (default)

CU_FUNC_CACHE_PREFER_SHARED = 1

prefer larger shared memory and smaller L1 cache

CU_FUNC_CACHE_PREFER_L1 = 2

prefer larger L1 cache and smaller shared memory

CU_FUNC_CACHE_PREFER_EQUAL = 3

prefer equal sized L1 cache and shared memory
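
The preference is applied per context with cuCtxSetCacheConfig() or per function with cuFuncSetCacheConfig(). A minimal example (error checks omitted):

    from cuda.bindings import driver

    err, = driver.cuInit(0)
    err, dev = driver.cuDeviceGet(0)
    err, ctx = driver.cuCtxCreate(0, dev)

    # Prefer a larger L1 cache over shared memory for kernels launched in this context.
    err, = driver.cuCtxSetCacheConfig(driver.CUfunc_cache.CU_FUNC_CACHE_PREFER_L1)

    # Read the current preference back.
    err, config = driver.cuCtxGetCacheConfig()
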

class cuda.bindings.driver.CUsharedconfig(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

[Deprecated] Shared memory configurations

CU_SHARED_MEM_CONFIG_DEFAULT_BANK_SIZE = 0

set default shared memory bank size

CU_SHARED_MEM_CONFIG_FOUR_BYTE_BANK_SIZE = 1

set shared memory bank width to four bytes

CU_SHARED_MEM_CONFIG_EIGHT_BYTE_BANK_SIZE = 2

set shared memory bank width to eight bytes

class cuda.bindings.driver.CUshared_carveout(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Shared memory carveout configurations. These may be passed to cuFuncSetAttribute or cuKernelSetAttribute

CU_SHAREDMEM_CARVEOUT_DEFAULT = -1

No preference for shared memory or L1 (default)

CU_SHAREDMEM_CARVEOUT_MAX_SHARED = 100

Prefer maximum available shared memory, minimum L1 cache

CU_SHAREDMEM_CARVEOUT_MAX_L1 = 0

Prefer maximum available L1 cache, minimum shared memory

class cuda.bindings.driver.CUmemorytype(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Memory types

CU_MEMORYTYPE_HOST = 1

Host memory

CU_MEMORYTYPE_DEVICE = 2

Device memory

CU_MEMORYTYPE_ARRAY = 3

Array memory

CU_MEMORYTYPE_UNIFIED = 4

Unified device or host memory

class cuda.bindings.driver.CUcomputemode(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Compute Modes

CU_COMPUTEMODE_DEFAULT = 0

Default compute mode (Multiple contexts allowed per device)

CU_COMPUTEMODE_PROHIBITED = 2

Compute-prohibited mode (No contexts can be created on this device at this time)

CU_COMPUTEMODE_EXCLUSIVE_PROCESS = 3

Compute-exclusive-process mode (Only one context used by a single process can be present on this device at a time)

class cuda.bindings.driver.CUmem_advise(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Memory advise values

CU_MEM_ADVISE_SET_READ_MOSTLY = 1

Data will mostly be read and only occasionally be written to

CU_MEM_ADVISE_UNSET_READ_MOSTLY = 2

Undo the effect of CU_MEM_ADVISE_SET_READ_MOSTLY

CU_MEM_ADVISE_SET_PREFERRED_LOCATION = 3

Set the preferred location for the data as the specified device

CU_MEM_ADVISE_UNSET_PREFERRED_LOCATION = 4

Clear the preferred location for the data

CU_MEM_ADVISE_SET_ACCESSED_BY = 5

Data will be accessed by the specified device, so prevent page faults as much as possible

CU_MEM_ADVISE_UNSET_ACCESSED_BY = 6

Let the Unified Memory subsystem decide on the page faulting policy for the specified device
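
Advice is applied to a managed (or, where supported, pageable) address range with cuMemAdvise(). A sketch using a managed allocation; the size and device are arbitrary and error checks are omitted:

    from cuda.bindings import driver

    err, = driver.cuInit(0)
    err, dev = driver.cuDeviceGet(0)
    err, ctx = driver.cuCtxCreate(0, dev)

    nbytes = 1 << 20
    err, mptr = driver.cuMemAllocManaged(
        nbytes, driver.CUmemAttach_flags.CU_MEM_ATTACH_GLOBAL)

    # Mark the range as read-mostly and prefer to keep it resident on this device.
    err, = driver.cuMemAdvise(
        mptr, nbytes, driver.CUmem_advise.CU_MEM_ADVISE_SET_READ_MOSTLY, dev)
    err, = driver.cuMemAdvise(
        mptr, nbytes, driver.CUmem_advise.CU_MEM_ADVISE_SET_PREFERRED_LOCATION, dev)
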

class cuda.bindings.driver.CUmem_range_attribute(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)
CU_MEM_RANGE_ATTRIBUTE_READ_MOSTLY = 1

Whether the range will mostly be read and only occasionally be written to

CU_MEM_RANGE_ATTRIBUTE_PREFERRED_LOCATION = 2

The preferred location of the range

CU_MEM_RANGE_ATTRIBUTE_ACCESSED_BY = 3

Memory range has CU_MEM_ADVISE_SET_ACCESSED_BY set for specified device

CU_MEM_RANGE_ATTRIBUTE_LAST_PREFETCH_LOCATION = 4

The last location to which the range was prefetched

CU_MEM_RANGE_ATTRIBUTE_PREFERRED_LOCATION_TYPE = 5

The preferred location type of the range

CU_MEM_RANGE_ATTRIBUTE_PREFERRED_LOCATION_ID = 6

The preferred location id of the range

CU_MEM_RANGE_ATTRIBUTE_LAST_PREFETCH_LOCATION_TYPE = 7

The last location type to which the range was prefetched

CU_MEM_RANGE_ATTRIBUTE_LAST_PREFETCH_LOCATION_ID = 8

The last location id to which the range was prefetched
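
The state of a managed range can be inspected with cuMemRangeGetAttribute(). A brief sketch continuing the cuMemAdvise example above, assuming the binding returns the attribute value in place of the C data out-parameter (both attributes are documented as 32-bit values, hence dataSize=4):

    from cuda.bindings import driver

    # `mptr` and `nbytes` refer to the managed allocation from the previous sketch.
    attr = driver.CUmem_range_attribute.CU_MEM_RANGE_ATTRIBUTE_READ_MOSTLY
    err, read_mostly = driver.cuMemRangeGetAttribute(4, attr, mptr, nbytes)  # 1 if the advice is set

    attr = driver.CUmem_range_attribute.CU_MEM_RANGE_ATTRIBUTE_LAST_PREFETCH_LOCATION
    err, last_loc = driver.cuMemRangeGetAttribute(4, attr, mptr, nbytes)     # device ordinal or a CPU/invalid sentinel
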

class cuda.bindings.driver.CUjit_option(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Online compiler and linker options

CU_JIT_MAX_REGISTERS = 0

Max number of registers that a thread may use.

Option type: unsigned int

Applies to: compiler only

CU_JIT_THREADS_PER_BLOCK = 1

IN: Specifies minimum number of threads per block to target compilation for

OUT: Returns the number of threads the compiler actually targeted. This restricts the resource utilization of the compiler (e.g. max registers) such that a block with the given number of threads should be able to launch based on register limitations. Note, this option does not currently take into account any other resource limitations, such as shared memory utilization.

Cannot be combined with CU_JIT_TARGET.

Option type: unsigned int

Applies to: compiler only

CU_JIT_WALL_TIME = 2

Overwrites the option value with the total wall clock time, in milliseconds, spent in the compiler and linker

Option type: float

Applies to: compiler and linker

CU_JIT_INFO_LOG_BUFFER = 3

Pointer to a buffer in which to print any log messages that are informational in nature (the buffer size is specified via option CU_JIT_INFO_LOG_BUFFER_SIZE_BYTES)

Option type: char *

Applies to: compiler and linker

CU_JIT_INFO_LOG_BUFFER_SIZE_BYTES = 4

IN: Log buffer size in bytes. Log messages will be capped at this size (including null terminator)

OUT: Amount of log buffer filled with messages

Option type: unsigned int

Applies to: compiler and linker

CU_JIT_ERROR_LOG_BUFFER = 5

Pointer to a buffer in which to print any log messages that reflect errors (the buffer size is specified via option CU_JIT_ERROR_LOG_BUFFER_SIZE_BYTES)

Option type: char *

Applies to: compiler and linker

CU_JIT_ERROR_LOG_BUFFER_SIZE_BYTES = 6

IN: Log buffer size in bytes. Log messages will be capped at this size (including null terminator)

OUT: Amount of log buffer filled with messages

Option type: unsigned int

Applies to: compiler and linker

CU_JIT_OPTIMIZATION_LEVEL = 7

Level of optimizations to apply to generated code (0 - 4), with 4 being the default and highest level of optimizations.

Option type: unsigned int

Applies to: compiler only

CU_JIT_TARGET_FROM_CUCONTEXT = 8

No option value required. Determines the target based on the current attached context (default)

Option type: No option value needed

Applies to: compiler and linker

CU_JIT_TARGET = 9

Target is chosen based on supplied CUjit_target. Cannot be combined with CU_JIT_THREADS_PER_BLOCK.

Option type: unsigned int for enumerated type CUjit_target

Applies to: compiler and linker

CU_JIT_FALLBACK_STRATEGY = 10

Specifies choice of fallback strategy if matching cubin is not found. Choice is based on supplied CUjit_fallback. This option cannot be used with cuLink* APIs as the linker requires exact matches.

Option type: unsigned int for enumerated type CUjit_fallback

Applies to: compiler only

CU_JIT_GENERATE_DEBUG_INFO = 11

Specifies whether to create debug information in output (-g) (0: false, default)

Option type: int

Applies to: compiler and linker

CU_JIT_LOG_VERBOSE = 12

Generate verbose log messages (0: false, default)

Option type: int

Applies to: compiler and linker

CU_JIT_GENERATE_LINE_INFO = 13

Generate line number information (-lineinfo) (0: false, default)

Option type: int

Applies to: compiler only

CU_JIT_CACHE_MODE = 14

Specifies whether to enable caching explicitly (-dlcm)

Choice is based on supplied CUjit_cacheMode_enum.

Option type: unsigned int for enumerated type CUjit_cacheMode_enum

Applies to: compiler only

CU_JIT_NEW_SM3X_OPT = 15

[Deprecated]

CU_JIT_FAST_COMPILE = 16

This jit option is used for internal purpose only.

CU_JIT_GLOBAL_SYMBOL_NAMES = 17

Array of device symbol names that will be relocated to the corresponding host addresses stored in CU_JIT_GLOBAL_SYMBOL_ADDRESSES.

Must contain CU_JIT_GLOBAL_SYMBOL_COUNT entries.

When loading a device module, driver will relocate all encountered unresolved symbols to the host addresses.

It is only allowed to register symbols that correspond to unresolved global variables.

It is illegal to register the same device symbol at multiple addresses.

Option type: const char **

Applies to: dynamic linker only

CU_JIT_GLOBAL_SYMBOL_ADDRESSES = 18

Array of host addresses that will be used to relocate corresponding device symbols stored in CU_JIT_GLOBAL_SYMBOL_NAMES.

Must contain CU_JIT_GLOBAL_SYMBOL_COUNT entries.

Option type: void **

Applies to: dynamic linker only

CU_JIT_GLOBAL_SYMBOL_COUNT = 19

Number of entries in CU_JIT_GLOBAL_SYMBOL_NAMES and CU_JIT_GLOBAL_SYMBOL_ADDRESSES arrays.

Option type: unsigned int

Applies to: dynamic linker only

CU_JIT_LTO = 20

[Deprecated]

Only valid with LTO-IR compiled with toolkits prior to CUDA 12.0

CU_JIT_FTZ = 21

[Deprecated]

Only valid with LTO-IR compiled with toolkits prior to CUDA 12.0

CU_JIT_PREC_DIV = 22

[Deprecated]

Only valid with LTO-IR compiled with toolkits prior to CUDA 12.0

CU_JIT_PREC_SQRT = 23

[Deprecated]

Only valid with LTO-IR compiled with toolkits prior to CUDA 12.0

CU_JIT_FMA = 24

[Deprecated]

Only valid with LTO-IR compiled with toolkits prior to CUDA 12.0

CU_JIT_REFERENCED_KERNEL_NAMES = 25

[Deprecated]

Only valid with LTO-IR compiled with toolkits prior to CUDA 12.0

CU_JIT_REFERENCED_KERNEL_COUNT = 26

[Deprecated]

Only valid with LTO-IR compiled with toolkits prior to CUDA 12.0

CU_JIT_REFERENCED_VARIABLE_NAMES = 27

[Deprecated]

Only valid with LTO-IR compiled with toolkits prior to CUDA 12.0

CU_JIT_REFERENCED_VARIABLE_COUNT = 28

[Deprecated]

Only valid with LTO-IR compiled with toolkits prior to CUDA 12.0

CU_JIT_OPTIMIZE_UNUSED_DEVICE_VARIABLES = 29

[Deprecated]

Only valid with LTO-IR compiled with toolkits prior to CUDA 12.0

CU_JIT_POSITION_INDEPENDENT_CODE = 30

Generate position independent code (0: false)

Option type: int

Applies to: compiler only

CU_JIT_MIN_CTA_PER_SM = 31

This option hints to the JIT compiler the minimum number of CTAs from the kernel’s grid to be mapped to an SM. This option is ignored when used together with CU_JIT_MAX_REGISTERS or CU_JIT_THREADS_PER_BLOCK. Optimizations based on this option need CU_JIT_MAX_THREADS_PER_BLOCK to be specified as well. For kernels already using the PTX directive .minnctapersm, this option will be ignored by default. Use CU_JIT_OVERRIDE_DIRECTIVE_VALUES to let this option take precedence over the PTX directive.

Option type: unsigned int

Applies to: compiler only

CU_JIT_MAX_THREADS_PER_BLOCK = 32

Maximum number of threads in a thread block, computed as the product of the maximum extent specified for each dimension of the block. This limit is guaranteed not to be exceeded in any invocation of the kernel. Exceeding the maximum number of threads results in a runtime error or kernel launch failure. For kernels already using the PTX directive .maxntid, this option will be ignored by default. Use CU_JIT_OVERRIDE_DIRECTIVE_VALUES to let this option take precedence over the PTX directive.

Option type: int

Applies to: compiler only

CU_JIT_OVERRIDE_DIRECTIVE_VALUES = 33

This option lets the values specified using CU_JIT_MAX_REGISTERS, CU_JIT_THREADS_PER_BLOCK, CU_JIT_MAX_THREADS_PER_BLOCK and CU_JIT_MIN_CTA_PER_SM take precedence over any PTX directives. (0: Disable, default; 1: Enable)

Option type: int

Applies to: compiler only

CU_JIT_NUM_OPTIONS = 34
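
JIT options are passed to cuModuleLoadDataEx() (and the cuLink* APIs) as parallel lists of option keys and option values. A minimal sketch, assuming `ptx` holds NUL-terminated PTX source as bytes and that plain Python ints are accepted for numeric option values (error checks omitted):

    from cuda.bindings import driver

    # `ptx` is an assumption: PTX text as bytes, e.g. produced by nvrtc or read from a .ptx file.
    options = [
        driver.CUjit_option.CU_JIT_MAX_REGISTERS,       # cap register usage per thread
        driver.CUjit_option.CU_JIT_GENERATE_LINE_INFO,  # embed line info for profilers
    ]
    option_values = [32, 1]

    err, module = driver.cuModuleLoadDataEx(ptx, len(options), options, option_values)
    err, kernel = driver.cuModuleGetFunction(module, b"my_kernel")  # "my_kernel" is a placeholder
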
class cuda.bindings.driver.CUjit_target(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Online compilation targets

CU_TARGET_COMPUTE_30 = 30

Compute device class 3.0

CU_TARGET_COMPUTE_32 = 32

Compute device class 3.2

CU_TARGET_COMPUTE_35 = 35

Compute device class 3.5

CU_TARGET_COMPUTE_37 = 37

Compute device class 3.7

CU_TARGET_COMPUTE_50 = 50

Compute device class 5.0

CU_TARGET_COMPUTE_52 = 52

Compute device class 5.2

CU_TARGET_COMPUTE_53 = 53

Compute device class 5.3

CU_TARGET_COMPUTE_60 = 60

Compute device class 6.0.

CU_TARGET_COMPUTE_61 = 61

Compute device class 6.1.

CU_TARGET_COMPUTE_62 = 62

Compute device class 6.2.

CU_TARGET_COMPUTE_70 = 70

Compute device class 7.0.

CU_TARGET_COMPUTE_72 = 72

Compute device class 7.2.

CU_TARGET_COMPUTE_75 = 75

Compute device class 7.5.

CU_TARGET_COMPUTE_80 = 80

Compute device class 8.0.

CU_TARGET_COMPUTE_86 = 86

Compute device class 8.6.

CU_TARGET_COMPUTE_87 = 87

Compute device class 8.7.

CU_TARGET_COMPUTE_89 = 89

Compute device class 8.9.

CU_TARGET_COMPUTE_90 = 90

Compute device class 9.0.

CU_TARGET_COMPUTE_90A = 65626

Compute device class 9.0 with accelerated features.

class cuda.bindings.driver.CUjit_fallback(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Cubin matching fallback strategies

CU_PREFER_PTX = 0

Prefer to compile ptx if exact binary match not found

CU_PREFER_BINARY = 1

Prefer to fall back to compatible binary code if exact match not found

class cuda.bindings.driver.CUjit_cacheMode(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Caching modes for dlcm

CU_JIT_CACHE_OPTION_NONE = 0

Compile with no -dlcm flag specified

CU_JIT_CACHE_OPTION_CG = 1

Compile with L1 cache disabled

CU_JIT_CACHE_OPTION_CA = 2

Compile with L1 cache enabled

class cuda.bindings.driver.CUjitInputType(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Device code formats

CU_JIT_INPUT_CUBIN = 0

Compiled device-class-specific device code

Applicable options: none

CU_JIT_INPUT_PTX = 1

PTX source code

Applicable options: PTX compiler options

CU_JIT_INPUT_FATBINARY = 2

Bundle of multiple cubins and/or PTX of some device code

Applicable options: PTX compiler options, CU_JIT_FALLBACK_STRATEGY

CU_JIT_INPUT_OBJECT = 3

Host object with embedded device code

Applicable options: PTX compiler options, CU_JIT_FALLBACK_STRATEGY

CU_JIT_INPUT_LIBRARY = 4

Archive of host objects with embedded device code

Applicable options: PTX compiler options, CU_JIT_FALLBACK_STRATEGY

CU_JIT_INPUT_NVVM = 5

[Deprecated]

Only valid with LTO-IR compiled with toolkits prior to CUDA 12.0

CU_JIT_NUM_INPUT_TYPES = 6
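
The input type is supplied to cuLinkAddData()/cuLinkAddFile() when building a module with the JIT linker. A sketch, again assuming `ptx` holds PTX source as bytes; the name argument is only used in diagnostics:

    from cuda.bindings import driver

    err, link_state = driver.cuLinkCreate(0, [], [])        # no JIT options

    # Add one PTX input; cubins, fatbinaries, objects or libraries would use the other CUjitInputType values.
    err, = driver.cuLinkAddData(
        link_state, driver.CUjitInputType.CU_JIT_INPUT_PTX,
        ptx, len(ptx), b"my_kernels.ptx", 0, [], [])

    err, cubin, cubin_size = driver.cuLinkComplete(link_state)  # linked image, owned by link_state
    err, module = driver.cuModuleLoadData(cubin)                # load it before destroying the linker
    err, = driver.cuLinkDestroy(link_state)
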
class cuda.bindings.driver.CUgraphicsRegisterFlags(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Flags to register a graphics resource

CU_GRAPHICS_REGISTER_FLAGS_NONE = 0
CU_GRAPHICS_REGISTER_FLAGS_READ_ONLY = 1
CU_GRAPHICS_REGISTER_FLAGS_WRITE_DISCARD = 2
CU_GRAPHICS_REGISTER_FLAGS_SURFACE_LDST = 4
CU_GRAPHICS_REGISTER_FLAGS_TEXTURE_GATHER = 8
class cuda.bindings.driver.CUgraphicsMapResourceFlags(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Flags for mapping and unmapping interop resources

CU_GRAPHICS_MAP_RESOURCE_FLAGS_NONE = 0
CU_GRAPHICS_MAP_RESOURCE_FLAGS_READ_ONLY = 1
CU_GRAPHICS_MAP_RESOURCE_FLAGS_WRITE_DISCARD = 2
class cuda.bindings.driver.CUarray_cubemap_face(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Array indices for cube faces

CU_CUBEMAP_FACE_POSITIVE_X = 0

Positive X face of cubemap

CU_CUBEMAP_FACE_NEGATIVE_X = 1

Negative X face of cubemap

CU_CUBEMAP_FACE_POSITIVE_Y = 2

Positive Y face of cubemap

CU_CUBEMAP_FACE_NEGATIVE_Y = 3

Negative Y face of cubemap

CU_CUBEMAP_FACE_POSITIVE_Z = 4

Positive Z face of cubemap

CU_CUBEMAP_FACE_NEGATIVE_Z = 5

Negative Z face of cubemap

class cuda.bindings.driver.CUlimit(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Limits

CU_LIMIT_STACK_SIZE = 0

GPU thread stack size

CU_LIMIT_PRINTF_FIFO_SIZE = 1

GPU printf FIFO size

CU_LIMIT_MALLOC_HEAP_SIZE = 2

GPU malloc heap size

CU_LIMIT_DEV_RUNTIME_SYNC_DEPTH = 3

GPU device runtime launch synchronize depth

CU_LIMIT_DEV_RUNTIME_PENDING_LAUNCH_COUNT = 4

GPU device runtime pending launch count

CU_LIMIT_MAX_L2_FETCH_GRANULARITY = 5

A value between 0 and 128 that indicates the maximum fetch granularity of L2 (in Bytes). This is a hint

CU_LIMIT_PERSISTING_L2_CACHE_SIZE = 6

A size in bytes for L2 persisting lines cache size

CU_LIMIT_SHMEM_SIZE = 7

A maximum size in bytes of shared memory available to CUDA kernels on a CIG context. Can only be queried, cannot be set

CU_LIMIT_CIG_ENABLED = 8

A non-zero value indicates this CUDA context is a CIG-enabled context. Can only be queried, cannot be set

CU_LIMIT_CIG_SHMEM_FALLBACK_ENABLED = 9

When set to a non-zero value, CUDA will fail to launch a kernel on a CIG context, instead of using the fallback path, if the kernel uses more shared memory than available

CU_LIMIT_MAX = 10
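
Limits are adjusted with cuCtxSetLimit() and read back with cuCtxGetLimit(). For example (error checks omitted):

    from cuda.bindings import driver

    err, = driver.cuInit(0)
    err, dev = driver.cuDeviceGet(0)
    err, ctx = driver.cuCtxCreate(0, dev)

    # Grow the device-side printf FIFO to 8 MiB, then read the resulting value back.
    err, = driver.cuCtxSetLimit(driver.CUlimit.CU_LIMIT_PRINTF_FIFO_SIZE, 8 * 1024 * 1024)
    err, fifo_size = driver.cuCtxGetLimit(driver.CUlimit.CU_LIMIT_PRINTF_FIFO_SIZE)

    err, stack_size = driver.cuCtxGetLimit(driver.CUlimit.CU_LIMIT_STACK_SIZE)  # per-thread stack, in bytes
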
class cuda.bindings.driver.CUresourcetype(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Resource types

CU_RESOURCE_TYPE_ARRAY = 0

Array resource

CU_RESOURCE_TYPE_MIPMAPPED_ARRAY = 1

Mipmapped array resource

CU_RESOURCE_TYPE_LINEAR = 2

Linear resource

CU_RESOURCE_TYPE_PITCH2D = 3

Pitch 2D resource

class cuda.bindings.driver.CUaccessProperty(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Specifies performance hint with CUaccessPolicyWindow for hitProp and missProp members.

CU_ACCESS_PROPERTY_NORMAL = 0

Normal cache persistence.

CU_ACCESS_PROPERTY_STREAMING = 1

Streaming access is less likely to persist in cache.

CU_ACCESS_PROPERTY_PERSISTING = 2

Persisting access is more likely to persist in cache.
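
These properties populate the hitProp/missProp fields of a CUaccessPolicyWindow, which can be attached to a stream (or to a kernel launch or graph node) as an access policy window attribute. A rough sketch, assuming the binding exposes CUstreamAttrValue with an accessPolicyWindow member mirroring the C struct, and that `stream` (CUstream) and `dptr` (CUdeviceptr) were created earlier; not a verbatim recipe:

    from cuda.bindings import driver

    # `stream` and `dptr` are assumptions; num_bytes must not exceed
    # CU_DEVICE_ATTRIBUTE_MAX_ACCESS_POLICY_WINDOW_SIZE for the device.
    nbytes = 1 << 20

    value = driver.CUstreamAttrValue()
    value.accessPolicyWindow.base_ptr = int(dptr)
    value.accessPolicyWindow.num_bytes = nbytes
    value.accessPolicyWindow.hitRatio = 0.6
    value.accessPolicyWindow.hitProp = driver.CUaccessProperty.CU_ACCESS_PROPERTY_PERSISTING
    value.accessPolicyWindow.missProp = driver.CUaccessProperty.CU_ACCESS_PROPERTY_STREAMING

    err, = driver.cuStreamSetAttribute(
        stream, driver.CUstreamAttrID.CU_STREAM_ATTRIBUTE_ACCESS_POLICY_WINDOW, value)
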

class cuda.bindings.driver.CUgraphConditionalNodeType(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Conditional node types

CU_GRAPH_COND_TYPE_IF = 0

Conditional ‘if’ Node. Body executed once if condition value is non-zero.

CU_GRAPH_COND_TYPE_WHILE = 1

Conditional ‘while’ Node. Body executed repeatedly while condition value is non-zero.

class cuda.bindings.driver.CUgraphNodeType(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Graph node types

CU_GRAPH_NODE_TYPE_KERNEL = 0

GPU kernel node

CU_GRAPH_NODE_TYPE_MEMCPY = 1

Memcpy node

CU_GRAPH_NODE_TYPE_MEMSET = 2

Memset node

CU_GRAPH_NODE_TYPE_HOST = 3

Host (executable) node

CU_GRAPH_NODE_TYPE_GRAPH = 4

Node which executes an embedded graph

CU_GRAPH_NODE_TYPE_EMPTY = 5

Empty (no-op) node

CU_GRAPH_NODE_TYPE_WAIT_EVENT = 6

External event wait node

CU_GRAPH_NODE_TYPE_EVENT_RECORD = 7

External event record node

CU_GRAPH_NODE_TYPE_EXT_SEMAS_SIGNAL = 8

External semaphore signal node

CU_GRAPH_NODE_TYPE_EXT_SEMAS_WAIT = 9

External semaphore wait node

CU_GRAPH_NODE_TYPE_MEM_ALLOC = 10

Memory Allocation Node

CU_GRAPH_NODE_TYPE_MEM_FREE = 11

Memory Free Node

CU_GRAPH_NODE_TYPE_BATCH_MEM_OP = 12

Batch MemOp Node

CU_GRAPH_NODE_TYPE_CONDITIONAL = 13

Conditional Node. May be used to implement a conditional execution path or loop inside of a graph. The graph(s) contained within the body of the conditional node can be selectively executed or iterated upon based on the value of a conditional variable.

Handles must be created in advance of creating the node using cuGraphConditionalHandleCreate.

The following restrictions apply to graphs which contain conditional nodes:

The graph cannot be used in a child node.

Only one instantiation of the graph may exist at any point in time.

The graph cannot be cloned.

To set the control value, supply a default value when creating the handle and/or call cudaGraphSetConditional from device code.

class cuda.bindings.driver.CUgraphDependencyType(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Type annotations that can be applied to graph edges as part of CUgraphEdgeData.

CU_GRAPH_DEPENDENCY_TYPE_DEFAULT = 0

This is an ordinary dependency.

CU_GRAPH_DEPENDENCY_TYPE_PROGRAMMATIC = 1

This dependency type allows the downstream node to use cudaGridDependencySynchronize(). It may only be used between kernel nodes, and must be used with either the CU_GRAPH_KERNEL_NODE_PORT_PROGRAMMATIC or CU_GRAPH_KERNEL_NODE_PORT_LAUNCH_ORDER outgoing port.

class cuda.bindings.driver.CUgraphInstantiateResult(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Graph instantiation results

CUDA_GRAPH_INSTANTIATE_SUCCESS = 0

Instantiation succeeded

CUDA_GRAPH_INSTANTIATE_ERROR = 1

Instantiation failed for an unexpected reason which is described in the return value of the function

CUDA_GRAPH_INSTANTIATE_INVALID_STRUCTURE = 2

Instantiation failed due to invalid structure, such as cycles

CUDA_GRAPH_INSTANTIATE_NODE_OPERATION_NOT_SUPPORTED = 3

Instantiation for device launch failed because the graph contained an unsupported operation

CUDA_GRAPH_INSTANTIATE_MULTIPLE_CTXS_NOT_SUPPORTED = 4

Instantiation for device launch failed due to the nodes belonging to different contexts

class cuda.bindings.driver.CUsynchronizationPolicy(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)
CU_SYNC_POLICY_AUTO = 1
CU_SYNC_POLICY_SPIN = 2
CU_SYNC_POLICY_YIELD = 3
CU_SYNC_POLICY_BLOCKING_SYNC = 4
class cuda.bindings.driver.CUclusterSchedulingPolicy(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Cluster scheduling policies. These may be passed to cuFuncSetAttribute or cuKernelSetAttribute

CU_CLUSTER_SCHEDULING_POLICY_DEFAULT = 0

the default policy

CU_CLUSTER_SCHEDULING_POLICY_SPREAD = 1

spread the blocks within a cluster to the SMs

CU_CLUSTER_SCHEDULING_POLICY_LOAD_BALANCING = 2

allow the hardware to load-balance the blocks in a cluster to the SMs

class cuda.bindings.driver.CUlaunchMemSyncDomain(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Memory Synchronization Domain. A kernel can be launched in a specified memory synchronization domain that affects all memory operations issued by that kernel. A memory barrier issued in one domain will only order memory operations in that domain, thus eliminating latency increase from memory barriers ordering unrelated traffic. By default, kernels are launched in domain 0. Kernels launched with CU_LAUNCH_MEM_SYNC_DOMAIN_REMOTE will have a different domain ID. The user may also alter the domain ID with CUlaunchMemSyncDomainMap for a specific stream / graph node / kernel launch. See CU_LAUNCH_ATTRIBUTE_MEM_SYNC_DOMAIN, cuStreamSetAttribute, cuLaunchKernelEx, cuGraphKernelNodeSetAttribute. Memory operations done in kernels launched in different domains are considered system-scope distanced. In other words, a GPU-scoped memory synchronization is not sufficient for memory order to be observed by kernels in another memory synchronization domain even if they are on the same GPU.

CU_LAUNCH_MEM_SYNC_DOMAIN_DEFAULT = 0

Launch kernels in the default domain

CU_LAUNCH_MEM_SYNC_DOMAIN_REMOTE = 1

Launch kernels in the remote domain

class cuda.bindings.driver.CUlaunchAttributeID(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Launch attributes enum; used as id field of CUlaunchAttribute

CU_LAUNCH_ATTRIBUTE_IGNORE = 0

Ignored entry, for convenient composition

CU_LAUNCH_ATTRIBUTE_ACCESS_POLICY_WINDOW = 1

Valid for streams, graph nodes, launches. See accessPolicyWindow.

CU_LAUNCH_ATTRIBUTE_COOPERATIVE = 2

Valid for graph nodes, launches. See cooperative.

CU_LAUNCH_ATTRIBUTE_SYNCHRONIZATION_POLICY = 3

Valid for streams. See syncPolicy.

CU_LAUNCH_ATTRIBUTE_CLUSTER_DIMENSION = 4

Valid for graph nodes, launches. See clusterDim.

CU_LAUNCH_ATTRIBUTE_CLUSTER_SCHEDULING_POLICY_PREFERENCE = 5

Valid for graph nodes, launches. See clusterSchedulingPolicyPreference.

CU_LAUNCH_ATTRIBUTE_PROGRAMMATIC_STREAM_SERIALIZATION = 6

Valid for launches. Setting programmaticStreamSerializationAllowed to non-0 signals that the kernel will use programmatic means to resolve its stream dependency, so that the CUDA runtime should opportunistically allow the grid’s execution to overlap with the previous kernel in the stream, if that kernel requests the overlap. The dependent launches can choose to wait on the dependency using the programmatic sync (cudaGridDependencySynchronize() or equivalent PTX instructions).

CU_LAUNCH_ATTRIBUTE_PROGRAMMATIC_EVENT = 7

Valid for launches. Set programmaticEvent to record the event. The event recorded through this launch attribute is guaranteed to only trigger after all blocks in the associated kernel trigger the event. A block can trigger the event through PTX launchdep.release or the CUDA builtin function cudaTriggerProgrammaticLaunchCompletion(). A trigger can also be inserted at the beginning of each block’s execution if triggerAtBlockStart is set to non-0. The dependent launches can choose to wait on the dependency using the programmatic sync (cudaGridDependencySynchronize() or equivalent PTX instructions). Note that dependents (including the CPU thread calling cuEventSynchronize()) are not guaranteed to observe the release precisely when it is released. For example, cuEventSynchronize() may only observe the event trigger long after the associated kernel has completed. This recording type is primarily meant for establishing programmatic dependency between device tasks. Note also this type of dependency allows, but does not guarantee, concurrent execution of tasks.

The event supplied must not be an interprocess or interop event. The event must disable timing (i.e. must be created with the CU_EVENT_DISABLE_TIMING flag set).

CU_LAUNCH_ATTRIBUTE_PRIORITY = 8

Valid for streams, graph nodes, launches. See priority.

CU_LAUNCH_ATTRIBUTE_MEM_SYNC_DOMAIN_MAP = 9

Valid for streams, graph nodes, launches. See memSyncDomainMap.

CU_LAUNCH_ATTRIBUTE_MEM_SYNC_DOMAIN = 10

Valid for streams, graph nodes, launches. See memSyncDomain.

CU_LAUNCH_ATTRIBUTE_LAUNCH_COMPLETION_EVENT = 12

Valid for launches. Set launchCompletionEvent to record the event.

Nominally, the event is triggered once all blocks of the kernel have begun execution. Currently this is a best effort. If a kernel B has a launch completion dependency on a kernel A, B may wait until A is complete. Alternatively, blocks of B may begin before all blocks of A have begun, for example if B can claim execution resources unavailable to A (e.g. they run on different GPUs) or if B is a higher priority than A. Exercise caution if such an ordering inversion could lead to deadlock.

A launch completion event is nominally similar to a programmatic event with triggerAtBlockStart set except that it is not visible to cudaGridDependencySynchronize() and can be used with compute capability less than 9.0.

The event supplied must not be an interprocess or interop event. The event must disable timing (i.e. must be created with the CU_EVENT_DISABLE_TIMING flag set).

CU_LAUNCH_ATTRIBUTE_DEVICE_UPDATABLE_KERNEL_NODE = 13

Valid for graph nodes, launches. This attribute is graphs-only, and passing it to a launch in a non-capturing stream will result in an error.

CUlaunchAttributeValue::deviceUpdatableKernelNode::deviceUpdatable can only be set to 0 or 1. Setting the field to 1 indicates that the corresponding kernel node should be device-updatable. On success, a handle will be returned via CUlaunchAttributeValue::deviceUpdatableKernelNode::devNode which can be passed to the various device-side update functions to update the node’s kernel parameters from within another kernel. For more information on the types of device updates that can be made, as well as the relevant limitations thereof, see cudaGraphKernelNodeUpdatesApply.

Nodes which are device-updatable have additional restrictions compared to regular kernel nodes. Firstly, device-updatable nodes cannot be removed from their graph via cuGraphDestroyNode. Additionally, once opted-in to this functionality, a node cannot opt out, and any attempt to set the deviceUpdatable attribute to 0 will result in an error. Device-updatable kernel nodes also cannot have their attributes copied to/from another kernel node via cuGraphKernelNodeCopyAttributes. Graphs containing one or more device-updatable nodes also do not allow multiple instantiation, and neither the graph nor its instantiated version can be passed to cuGraphExecUpdate.

If a graph contains device-updatable nodes and updates those nodes from the device from within the graph, the graph must be uploaded with cuGraphUpload before it is launched. For such a graph, if host-side executable graph updates are made to the device-updatable nodes, the graph must be uploaded before it is launched again.

CU_LAUNCH_ATTRIBUTE_PREFERRED_SHARED_MEMORY_CARVEOUT = 14

Valid for launches. On devices where the L1 cache and shared memory use the same hardware resources, setting sharedMemCarveout to a percentage between 0-100 signals the CUDA driver to set the shared memory carveout preference, in percent of the total shared memory for that kernel launch. This attribute takes precedence over CU_FUNC_ATTRIBUTE_PREFERRED_SHARED_MEMORY_CARVEOUT. This is only a hint, and the CUDA driver can choose a different configuration if required for the launch.

class cuda.bindings.driver.CUstreamCaptureStatus(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Possible stream capture statuses returned by cuStreamIsCapturing

CU_STREAM_CAPTURE_STATUS_NONE = 0

Stream is not capturing

CU_STREAM_CAPTURE_STATUS_ACTIVE = 1

Stream is actively capturing

CU_STREAM_CAPTURE_STATUS_INVALIDATED = 2

Stream is part of a capture sequence that has been invalidated, but not terminated

class cuda.bindings.driver.CUstreamCaptureMode(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Possible modes for stream capture thread interactions. For more details see cuStreamBeginCapture and cuThreadExchangeStreamCaptureMode

CU_STREAM_CAPTURE_MODE_GLOBAL = 0
CU_STREAM_CAPTURE_MODE_THREAD_LOCAL = 1
CU_STREAM_CAPTURE_MODE_RELAXED = 2
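
Capture is started and ended on a stream with cuStreamBeginCapture()/cuStreamEndCapture(), and the status can be polled with cuStreamIsCapturing(). A minimal sketch (error checks omitted); work enqueued between begin and end is recorded into a graph rather than executed:

    from cuda.bindings import driver

    err, = driver.cuInit(0)
    err, dev = driver.cuDeviceGet(0)
    err, ctx = driver.cuCtxCreate(0, dev)
    err, stream = driver.cuStreamCreate(0)   # default stream flags

    err, = driver.cuStreamBeginCapture(
        stream, driver.CUstreamCaptureMode.CU_STREAM_CAPTURE_MODE_THREAD_LOCAL)

    err, status = driver.cuStreamIsCapturing(stream)
    # status is CU_STREAM_CAPTURE_STATUS_ACTIVE between BeginCapture and EndCapture.

    # ... enqueue kernels / memcpys into `stream` here ...

    err, graph = driver.cuStreamEndCapture(stream)   # the captured CUgraph
    err, = driver.cuStreamDestroy(stream)
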
class cuda.bindings.driver.CUdriverProcAddress_flags(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Flags to specify search options. For more details see cuGetProcAddress

CU_GET_PROC_ADDRESS_DEFAULT = 0

Default search mode for driver symbols.

CU_GET_PROC_ADDRESS_LEGACY_STREAM = 1

Search for legacy versions of driver symbols.

CU_GET_PROC_ADDRESS_PER_THREAD_DEFAULT_STREAM = 2

Search for per-thread versions of driver symbols.

class cuda.bindings.driver.CUdriverProcAddressQueryResult(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Flags to indicate search status. For more details see cuGetProcAddress

CU_GET_PROC_ADDRESS_SUCCESS = 0

Symbol was successfully found

CU_GET_PROC_ADDRESS_SYMBOL_NOT_FOUND = 1

Symbol was not found in search

CU_GET_PROC_ADDRESS_VERSION_NOT_SUFFICIENT = 2

Symbol was found but version supplied was not sufficient

class cuda.bindings.driver.CUexecAffinityType(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Execution Affinity Types

CU_EXEC_AFFINITY_TYPE_SM_COUNT = 0

Create a context with limited SMs.

CU_EXEC_AFFINITY_TYPE_MAX = 1
class cuda.bindings.driver.CUcigDataType(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)
CIG_DATA_TYPE_D3D12_COMMAND_QUEUE = 1
class cuda.bindings.driver.CUlibraryOption(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Library options to be specified with cuLibraryLoadData() or cuLibraryLoadFromFile()

CU_LIBRARY_HOST_UNIVERSAL_FUNCTION_AND_DATA_TABLE = 0
CU_LIBRARY_BINARY_IS_PRESERVED = 1

Specifies that the argument code passed to cuLibraryLoadData() will be preserved. Specifying this option will let the driver know that code can be accessed at any point until cuLibraryUnload(). The default behavior is for the driver to allocate and maintain its own copy of code. Note that this is only a memory usage optimization hint and the driver can choose to ignore it if required. Specifying this option with cuLibraryLoadFromFile() is invalid and will return CUDA_ERROR_INVALID_VALUE.

CU_LIBRARY_NUM_OPTIONS = 2
class cuda.bindings.driver.CUresult(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Error codes

CUDA_SUCCESS = 0

The API call returned with no errors. In the case of query calls, this also means that the operation being queried is complete (see cuEventQuery() and cuStreamQuery()).

CUDA_ERROR_INVALID_VALUE = 1

This indicates that one or more of the parameters passed to the API call is not within an acceptable range of values.

CUDA_ERROR_OUT_OF_MEMORY = 2

The API call failed because it was unable to allocate enough memory or other resources to perform the requested operation.

CUDA_ERROR_NOT_INITIALIZED = 3

This indicates that the CUDA driver has not been initialized with cuInit() or that initialization has failed.

CUDA_ERROR_DEINITIALIZED = 4

This indicates that the CUDA driver is in the process of shutting down.

CUDA_ERROR_PROFILER_DISABLED = 5

This indicates that the profiler is not initialized for this run. This can happen when the application is running with external profiling tools like the Visual Profiler.

CUDA_ERROR_PROFILER_NOT_INITIALIZED = 6

[Deprecated]

CUDA_ERROR_PROFILER_ALREADY_STARTED = 7

[Deprecated]

CUDA_ERROR_PROFILER_ALREADY_STOPPED = 8

[Deprecated]

CUDA_ERROR_STUB_LIBRARY = 34

This indicates that the CUDA driver that the application has loaded is a stub library. Applications that run with the stub rather than a real driver loaded will result in CUDA API returning this error.

CUDA_ERROR_DEVICE_UNAVAILABLE = 46

This indicates that the requested CUDA device is unavailable at the current time. Devices are often unavailable due to use of CU_COMPUTEMODE_EXCLUSIVE_PROCESS or CU_COMPUTEMODE_PROHIBITED.

CUDA_ERROR_NO_DEVICE = 100

This indicates that no CUDA-capable devices were detected by the installed CUDA driver.

CUDA_ERROR_INVALID_DEVICE = 101

This indicates that the device ordinal supplied by the user does not correspond to a valid CUDA device or that the action requested is invalid for the specified device.

CUDA_ERROR_DEVICE_NOT_LICENSED = 102

This error indicates that the Grid license is not applied.

CUDA_ERROR_INVALID_IMAGE = 200

This indicates that the device kernel image is invalid. This can also indicate an invalid CUDA module.

CUDA_ERROR_INVALID_CONTEXT = 201

This most frequently indicates that there is no context bound to the current thread. This can also be returned if the context passed to an API call is not a valid handle (such as a context that has had cuCtxDestroy() invoked on it). This can also be returned if a user mixes different API versions (i.e. 3010 context with 3020 API calls). See cuCtxGetApiVersion() for more details. This can also be returned if the green context passed to an API call was not converted to a CUcontext using cuCtxFromGreenCtx API.

CUDA_ERROR_CONTEXT_ALREADY_CURRENT = 202

This indicates that the context being supplied as a parameter to the API call was already the active context. [Deprecated]

CUDA_ERROR_MAP_FAILED = 205

This indicates that a map or register operation has failed.

CUDA_ERROR_UNMAP_FAILED = 206

This indicates that an unmap or unregister operation has failed.

CUDA_ERROR_ARRAY_IS_MAPPED = 207

This indicates that the specified array is currently mapped and thus cannot be destroyed.

CUDA_ERROR_ALREADY_MAPPED = 208

This indicates that the resource is already mapped.

CUDA_ERROR_NO_BINARY_FOR_GPU = 209

This indicates that there is no kernel image available that is suitable for the device. This can occur when a user specifies code generation options for a particular CUDA source file that do not include the corresponding device configuration.

CUDA_ERROR_ALREADY_ACQUIRED = 210

This indicates that a resource has already been acquired.

CUDA_ERROR_NOT_MAPPED = 211

This indicates that a resource is not mapped.

CUDA_ERROR_NOT_MAPPED_AS_ARRAY = 212

This indicates that a mapped resource is not available for access as an array.

CUDA_ERROR_NOT_MAPPED_AS_POINTER = 213

This indicates that a mapped resource is not available for access as a pointer.

CUDA_ERROR_ECC_UNCORRECTABLE = 214

This indicates that an uncorrectable ECC error was detected during execution.

CUDA_ERROR_UNSUPPORTED_LIMIT = 215

This indicates that the CUlimit passed to the API call is not supported by the active device.

CUDA_ERROR_CONTEXT_ALREADY_IN_USE = 216

This indicates that the CUcontext passed to the API call can only be bound to a single CPU thread at a time but is already bound to a CPU thread.

CUDA_ERROR_PEER_ACCESS_UNSUPPORTED = 217

This indicates that peer access is not supported across the given devices.

CUDA_ERROR_INVALID_PTX = 218

This indicates that a PTX JIT compilation failed.

CUDA_ERROR_INVALID_GRAPHICS_CONTEXT = 219

This indicates an error with OpenGL or DirectX context.

CUDA_ERROR_NVLINK_UNCORRECTABLE = 220

This indicates that an uncorrectable NVLink error was detected during the execution.

CUDA_ERROR_JIT_COMPILER_NOT_FOUND = 221

This indicates that the PTX JIT compiler library was not found.

CUDA_ERROR_UNSUPPORTED_PTX_VERSION = 222

This indicates that the provided PTX was compiled with an unsupported toolchain.

CUDA_ERROR_JIT_COMPILATION_DISABLED = 223

This indicates that the PTX JIT compilation was disabled.

CUDA_ERROR_UNSUPPORTED_EXEC_AFFINITY = 224

This indicates that the CUexecAffinityType passed to the API call is not supported by the active device.

CUDA_ERROR_UNSUPPORTED_DEVSIDE_SYNC = 225

This indicates that the code to be compiled by the PTX JIT contains an unsupported call to cudaDeviceSynchronize.

CUDA_ERROR_INVALID_SOURCE = 300

This indicates that the device kernel source is invalid. This includes compilation/linker errors encountered in device code or user error.

CUDA_ERROR_FILE_NOT_FOUND = 301

This indicates that the file specified was not found.

CUDA_ERROR_SHARED_OBJECT_SYMBOL_NOT_FOUND = 302

This indicates that a link to a shared object failed to resolve.

CUDA_ERROR_SHARED_OBJECT_INIT_FAILED = 303

This indicates that initialization of a shared object failed.

CUDA_ERROR_OPERATING_SYSTEM = 304

This indicates that an OS call failed.

CUDA_ERROR_INVALID_HANDLE = 400

This indicates that a resource handle passed to the API call was not valid. Resource handles are opaque types like CUstream and CUevent.

CUDA_ERROR_ILLEGAL_STATE = 401

This indicates that a resource required by the API call is not in a valid state to perform the requested operation.

CUDA_ERROR_LOSSY_QUERY = 402

This indicates that an attempt was made to introspect an object in a way that would discard semantically important information. This happens either because the object uses functionality newer than the API version used to introspect it, or because optional return arguments were omitted.

CUDA_ERROR_NOT_FOUND = 500

This indicates that a named symbol was not found. Examples of symbols are global/constant variable names, driver function names, texture names, and surface names.

CUDA_ERROR_NOT_READY = 600

This indicates that asynchronous operations issued previously have not completed yet. This result is not actually an error, but must be indicated differently than CUDA_SUCCESS (which indicates completion). Calls that may return this value include cuEventQuery() and cuStreamQuery().
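
Because CUDA_ERROR_NOT_READY reports pending work rather than a failure, callers usually test for it explicitly. A minimal sketch using the Python driver bindings (the stream variable is assumed to come from an earlier cuStreamCreate call):

    from cuda.bindings import driver

    def stream_is_idle(stream):
        # cuStreamQuery returns CUDA_SUCCESS once all queued work has finished,
        # CUDA_ERROR_NOT_READY while work is still pending, and a real error otherwise.
        (err,) = driver.cuStreamQuery(stream)
        if err == driver.CUresult.CUDA_SUCCESS:
            return True
        if err == driver.CUresult.CUDA_ERROR_NOT_READY:
            return False
        raise RuntimeError(f"cuStreamQuery failed with {err}")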

CUDA_ERROR_ILLEGAL_ADDRESS = 700

While executing a kernel, the device encountered a load or store instruction on an invalid memory address. This leaves the process in an inconsistent state and any further CUDA work will return the same error. To continue using CUDA, the process must be terminated and relaunched.

CUDA_ERROR_LAUNCH_OUT_OF_RESOURCES = 701

This indicates that a launch did not occur because it did not have appropriate resources. This error usually indicates that the user has attempted to pass too many arguments to the device kernel, or the kernel launch specifies too many threads for the kernel’s register count. Passing arguments of the wrong size (i.e. a 64-bit pointer when a 32-bit int is expected) is equivalent to passing too many arguments and can also result in this error.

CUDA_ERROR_LAUNCH_TIMEOUT = 702

This indicates that the device kernel took too long to execute. This can only occur if timeouts are enabled - see the device attribute CU_DEVICE_ATTRIBUTE_KERNEL_EXEC_TIMEOUT for more information. This leaves the process in an inconsistent state and any further CUDA work will return the same error. To continue using CUDA, the process must be terminated and relaunched.

CUDA_ERROR_LAUNCH_INCOMPATIBLE_TEXTURING = 703

This error indicates a kernel launch that uses an incompatible texturing mode.

CUDA_ERROR_PEER_ACCESS_ALREADY_ENABLED = 704

This error indicates that a call to cuCtxEnablePeerAccess() is trying to re-enable peer access to a context which has already had peer access to it enabled.

CUDA_ERROR_PEER_ACCESS_NOT_ENABLED = 705

This error indicates that cuCtxDisablePeerAccess() is trying to disable peer access which has not been enabled yet via cuCtxEnablePeerAccess().

CUDA_ERROR_PRIMARY_CONTEXT_ACTIVE = 708

This error indicates that the primary context for the specified device has already been initialized.

CUDA_ERROR_CONTEXT_IS_DESTROYED = 709

This error indicates that the context current to the calling thread has been destroyed using cuCtxDestroy, or is a primary context which has not yet been initialized.

CUDA_ERROR_ASSERT = 710

A device-side assert triggered during kernel execution. The context cannot be used anymore, and must be destroyed. All existing device memory allocations from this context are invalid and must be reconstructed if the program is to continue using CUDA.

CUDA_ERROR_TOO_MANY_PEERS = 711

This error indicates that the hardware resources required to enable peer access have been exhausted for one or more of the devices passed to cuCtxEnablePeerAccess().

CUDA_ERROR_HOST_MEMORY_ALREADY_REGISTERED = 712

This error indicates that the memory range passed to cuMemHostRegister() has already been registered.

CUDA_ERROR_HOST_MEMORY_NOT_REGISTERED = 713

This error indicates that the pointer passed to cuMemHostUnregister() does not correspond to any currently registered memory region.

CUDA_ERROR_HARDWARE_STACK_ERROR = 714

While executing a kernel, the device encountered a stack error. This can be due to stack corruption or exceeding the stack size limit. This leaves the process in an inconsistent state and any further CUDA work will return the same error. To continue using CUDA, the process must be terminated and relaunched.

CUDA_ERROR_ILLEGAL_INSTRUCTION = 715

While executing a kernel, the device encountered an illegal instruction. This leaves the process in an inconsistent state and any further CUDA work will return the same error. To continue using CUDA, the process must be terminated and relaunched.

CUDA_ERROR_MISALIGNED_ADDRESS = 716

While executing a kernel, the device encountered a load or store instruction on a memory address which is not aligned. This leaves the process in an inconsistent state and any further CUDA work will return the same error. To continue using CUDA, the process must be terminated and relaunched.

CUDA_ERROR_INVALID_ADDRESS_SPACE = 717

While executing a kernel, the device encountered an instruction which can only operate on memory locations in certain address spaces (global, shared, or local), but was supplied a memory address not belonging to an allowed address space. This leaves the process in an inconsistent state and any further CUDA work will return the same error. To continue using CUDA, the process must be terminated and relaunched.

CUDA_ERROR_INVALID_PC = 718

While executing a kernel, the device program counter wrapped its address space. This leaves the process in an inconsistent state and any further CUDA work will return the same error. To continue using CUDA, the process must be terminated and relaunched.

CUDA_ERROR_LAUNCH_FAILED = 719

An exception occurred on the device while executing a kernel. Common causes include dereferencing an invalid device pointer and accessing out of bounds shared memory. Less common cases can be system specific - more information about these cases can be found in the system specific user guide. This leaves the process in an inconsistent state and any further CUDA work will return the same error. To continue using CUDA, the process must be terminated and relaunched.

CUDA_ERROR_COOPERATIVE_LAUNCH_TOO_LARGE = 720

This error indicates that the number of blocks launched per grid for a kernel that was launched via either cuLaunchCooperativeKernel or cuLaunchCooperativeKernelMultiDevice exceeds the maximum number of blocks as allowed by cuOccupancyMaxActiveBlocksPerMultiprocessor or cuOccupancyMaxActiveBlocksPerMultiprocessorWithFlags times the number of multiprocessors as specified by the device attribute CU_DEVICE_ATTRIBUTE_MULTIPROCESSOR_COUNT.

CUDA_ERROR_NOT_PERMITTED = 800

This error indicates that the attempted operation is not permitted.

CUDA_ERROR_NOT_SUPPORTED = 801

This error indicates that the attempted operation is not supported on the current system or device.

CUDA_ERROR_SYSTEM_NOT_READY = 802

This error indicates that the system is not yet ready to start any CUDA work. To continue using CUDA, verify the system configuration is in a valid state and all required driver daemons are actively running. More information about this error can be found in the system specific user guide.

CUDA_ERROR_SYSTEM_DRIVER_MISMATCH = 803

This error indicates that there is a mismatch between the versions of the display driver and the CUDA driver. Refer to the compatibility documentation for supported versions.

CUDA_ERROR_COMPAT_NOT_SUPPORTED_ON_DEVICE = 804

This error indicates that the system was upgraded to run with forward compatibility but the visible hardware detected by CUDA does not support this configuration. Refer to the compatibility documentation for the supported hardware matrix or ensure that only supported hardware is visible during initialization via the CUDA_VISIBLE_DEVICES environment variable.

CUDA_ERROR_MPS_CONNECTION_FAILED = 805

This error indicates that the MPS client failed to connect to the MPS control daemon or the MPS server.

CUDA_ERROR_MPS_RPC_FAILURE = 806

This error indicates that the remote procedure call between the MPS server and the MPS client failed.

CUDA_ERROR_MPS_SERVER_NOT_READY = 807

This error indicates that the MPS server is not ready to accept new MPS client requests. This error can be returned when the MPS server is in the process of recovering from a fatal failure.

CUDA_ERROR_MPS_MAX_CLIENTS_REACHED = 808

This error indicates that the hardware resources required to create MPS client have been exhausted.

CUDA_ERROR_MPS_MAX_CONNECTIONS_REACHED = 809

This error indicates that the hardware resources required to support device connections have been exhausted.

CUDA_ERROR_MPS_CLIENT_TERMINATED = 810

This error indicates that the MPS client has been terminated by the server. To continue using CUDA, the process must be terminated and relaunched.

CUDA_ERROR_CDP_NOT_SUPPORTED = 811

This error indicates that the module is using CUDA Dynamic Parallelism, but the current configuration, like MPS, does not support it.

CUDA_ERROR_CDP_VERSION_MISMATCH = 812

This error indicates that a module contains an unsupported interaction between different versions of CUDA Dynamic Parallelism.

CUDA_ERROR_STREAM_CAPTURE_UNSUPPORTED = 900

This error indicates that the operation is not permitted when the stream is capturing.

CUDA_ERROR_STREAM_CAPTURE_INVALIDATED = 901

This error indicates that the current capture sequence on the stream has been invalidated due to a previous error.

CUDA_ERROR_STREAM_CAPTURE_MERGE = 902

This error indicates that the operation would have resulted in a merge of two independent capture sequences.

CUDA_ERROR_STREAM_CAPTURE_UNMATCHED = 903

This error indicates that the capture was not initiated in this stream.

CUDA_ERROR_STREAM_CAPTURE_UNJOINED = 904

This error indicates that the capture sequence contains a fork that was not joined to the primary stream.

CUDA_ERROR_STREAM_CAPTURE_ISOLATION = 905

This error indicates that a dependency would have been created which crosses the capture sequence boundary. Only implicit in-stream ordering dependencies are allowed to cross the boundary.

CUDA_ERROR_STREAM_CAPTURE_IMPLICIT = 906

This error indicates a disallowed implicit dependency on a current capture sequence from cudaStreamLegacy.

CUDA_ERROR_CAPTURED_EVENT = 907

This error indicates that the operation is not permitted on an event which was last recorded in a capturing stream.

CUDA_ERROR_STREAM_CAPTURE_WRONG_THREAD = 908

A stream capture sequence not initiated with the CU_STREAM_CAPTURE_MODE_RELAXED argument to cuStreamBeginCapture was passed to cuStreamEndCapture in a different thread.

CUDA_ERROR_TIMEOUT = 909

This error indicates that the timeout specified for the wait operation has lapsed.

CUDA_ERROR_GRAPH_EXEC_UPDATE_FAILURE = 910

This error indicates that the graph update was not performed because it included changes which violated constraints specific to instantiated graph update.

CUDA_ERROR_EXTERNAL_DEVICE = 911

This indicates that an async error has occurred in a device outside of CUDA. If CUDA was waiting for an external device’s signal before consuming shared data, the external device signaled an error indicating that the data is not valid for consumption. This leaves the process in an inconsistent state and any further CUDA work will return the same error. To continue using CUDA, the process must be terminated and relaunched.

CUDA_ERROR_INVALID_CLUSTER_SIZE = 912

Indicates a kernel launch error due to cluster misconfiguration.

CUDA_ERROR_FUNCTION_NOT_LOADED = 913

Indicates that a function handle is not loaded when calling an API that requires a loaded function.

CUDA_ERROR_INVALID_RESOURCE_TYPE = 914

This error indicates one or more resources passed in are not valid resource types for the operation.

CUDA_ERROR_INVALID_RESOURCE_CONFIGURATION = 915

This error indicates one or more resources are insufficient or non-applicable for the operation.

CUDA_ERROR_UNKNOWN = 999

This indicates that an unknown internal error has occurred.
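
In these bindings every driver call returns its CUresult as the first element of a result tuple, so the codes above are what application code checks after each call. A small helper that converts any non-success code into a Python exception (a sketch; the helper name is illustrative):

    from cuda.bindings import driver

    def check(err):
        # Raise on anything other than CUDA_SUCCESS, using the driver's own
        # error name and description strings where available.
        if err != driver.CUresult.CUDA_SUCCESS:
            _, name = driver.cuGetErrorName(err)
            _, desc = driver.cuGetErrorString(err)
            raise RuntimeError(f"{name.decode()}: {desc.decode()}")

    check(driver.cuInit(0)[0])
    err, dev = driver.cuDeviceGet(0)
    check(err)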

class cuda.bindings.driver.CUdevice_P2PAttribute(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

P2P Attributes

CU_DEVICE_P2P_ATTRIBUTE_PERFORMANCE_RANK = 1

A relative value indicating the performance of the link between two devices

CU_DEVICE_P2P_ATTRIBUTE_ACCESS_SUPPORTED = 2

P2P access is enabled between the two devices

CU_DEVICE_P2P_ATTRIBUTE_NATIVE_ATOMIC_SUPPORTED = 3

Atomic operations over the link are supported

CU_DEVICE_P2P_ATTRIBUTE_ACCESS_ACCESS_SUPPORTED = 4

[Deprecated]

CU_DEVICE_P2P_ATTRIBUTE_CUDA_ARRAY_ACCESS_SUPPORTED = 4

Accessing CUDA arrays over the link supported
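
These attributes are queried per ordered device pair with cuDeviceGetP2PAttribute. A brief sketch (assumes cuInit has already been called and at least two devices are present):

    from cuda.bindings import driver

    err, dev0 = driver.cuDeviceGet(0)
    err, dev1 = driver.cuDeviceGet(1)

    # Ask whether dev0 can access memory resident on dev1.
    attr = driver.CUdevice_P2PAttribute.CU_DEVICE_P2P_ATTRIBUTE_ACCESS_SUPPORTED
    err, supported = driver.cuDeviceGetP2PAttribute(attr, dev0, dev1)
    if err == driver.CUresult.CUDA_SUCCESS and supported:
        print("peer access from device 0 to device 1 is supported")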

class cuda.bindings.driver.CUresourceViewFormat(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Resource view format

CU_RES_VIEW_FORMAT_NONE = 0

No resource view format (use underlying resource format)

CU_RES_VIEW_FORMAT_UINT_1X8 = 1

1 channel unsigned 8-bit integers

CU_RES_VIEW_FORMAT_UINT_2X8 = 2

2 channel unsigned 8-bit integers

CU_RES_VIEW_FORMAT_UINT_4X8 = 3

4 channel unsigned 8-bit integers

CU_RES_VIEW_FORMAT_SINT_1X8 = 4

1 channel signed 8-bit integers

CU_RES_VIEW_FORMAT_SINT_2X8 = 5

2 channel signed 8-bit integers

CU_RES_VIEW_FORMAT_SINT_4X8 = 6

4 channel signed 8-bit integers

CU_RES_VIEW_FORMAT_UINT_1X16 = 7

1 channel unsigned 16-bit integers

CU_RES_VIEW_FORMAT_UINT_2X16 = 8

2 channel unsigned 16-bit integers

CU_RES_VIEW_FORMAT_UINT_4X16 = 9

4 channel unsigned 16-bit integers

CU_RES_VIEW_FORMAT_SINT_1X16 = 10

1 channel signed 16-bit integers

CU_RES_VIEW_FORMAT_SINT_2X16 = 11

2 channel signed 16-bit integers

CU_RES_VIEW_FORMAT_SINT_4X16 = 12

4 channel signed 16-bit integers

CU_RES_VIEW_FORMAT_UINT_1X32 = 13

1 channel unsigned 32-bit integers

CU_RES_VIEW_FORMAT_UINT_2X32 = 14

2 channel unsigned 32-bit integers

CU_RES_VIEW_FORMAT_UINT_4X32 = 15

4 channel unsigned 32-bit integers

CU_RES_VIEW_FORMAT_SINT_1X32 = 16

1 channel signed 32-bit integers

CU_RES_VIEW_FORMAT_SINT_2X32 = 17

2 channel signed 32-bit integers

CU_RES_VIEW_FORMAT_SINT_4X32 = 18

4 channel signed 32-bit integers

CU_RES_VIEW_FORMAT_FLOAT_1X16 = 19

1 channel 16-bit floating point

CU_RES_VIEW_FORMAT_FLOAT_2X16 = 20

2 channel 16-bit floating point

CU_RES_VIEW_FORMAT_FLOAT_4X16 = 21

4 channel 16-bit floating point

CU_RES_VIEW_FORMAT_FLOAT_1X32 = 22

1 channel 32-bit floating point

CU_RES_VIEW_FORMAT_FLOAT_2X32 = 23

2 channel 32-bit floating point

CU_RES_VIEW_FORMAT_FLOAT_4X32 = 24

4 channel 32-bit floating point

CU_RES_VIEW_FORMAT_UNSIGNED_BC1 = 25

Block compressed 1

CU_RES_VIEW_FORMAT_UNSIGNED_BC2 = 26

Block compressed 2

CU_RES_VIEW_FORMAT_UNSIGNED_BC3 = 27

Block compressed 3

CU_RES_VIEW_FORMAT_UNSIGNED_BC4 = 28

Block compressed 4 unsigned

CU_RES_VIEW_FORMAT_SIGNED_BC4 = 29

Block compressed 4 signed

CU_RES_VIEW_FORMAT_UNSIGNED_BC5 = 30

Block compressed 5 unsigned

CU_RES_VIEW_FORMAT_SIGNED_BC5 = 31

Block compressed 5 signed

CU_RES_VIEW_FORMAT_UNSIGNED_BC6H = 32

Block compressed 6 unsigned half-float

CU_RES_VIEW_FORMAT_SIGNED_BC6H = 33

Block compressed 6 signed half-float

CU_RES_VIEW_FORMAT_UNSIGNED_BC7 = 34

Block compressed 7

class cuda.bindings.driver.CUtensorMapDataType(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Tensor map data type

CU_TENSOR_MAP_DATA_TYPE_UINT8 = 0
CU_TENSOR_MAP_DATA_TYPE_UINT16 = 1
CU_TENSOR_MAP_DATA_TYPE_UINT32 = 2
CU_TENSOR_MAP_DATA_TYPE_INT32 = 3
CU_TENSOR_MAP_DATA_TYPE_UINT64 = 4
CU_TENSOR_MAP_DATA_TYPE_INT64 = 5
CU_TENSOR_MAP_DATA_TYPE_FLOAT16 = 6
CU_TENSOR_MAP_DATA_TYPE_FLOAT32 = 7
CU_TENSOR_MAP_DATA_TYPE_FLOAT64 = 8
CU_TENSOR_MAP_DATA_TYPE_BFLOAT16 = 9
CU_TENSOR_MAP_DATA_TYPE_FLOAT32_FTZ = 10
CU_TENSOR_MAP_DATA_TYPE_TFLOAT32 = 11
CU_TENSOR_MAP_DATA_TYPE_TFLOAT32_FTZ = 12
class cuda.bindings.driver.CUtensorMapInterleave(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Tensor map interleave layout type

CU_TENSOR_MAP_INTERLEAVE_NONE = 0
CU_TENSOR_MAP_INTERLEAVE_16B = 1
CU_TENSOR_MAP_INTERLEAVE_32B = 2
class cuda.bindings.driver.CUtensorMapSwizzle(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Tensor map swizzling mode of shared memory banks

CU_TENSOR_MAP_SWIZZLE_NONE = 0
CU_TENSOR_MAP_SWIZZLE_32B = 1
CU_TENSOR_MAP_SWIZZLE_64B = 2
CU_TENSOR_MAP_SWIZZLE_128B = 3
class cuda.bindings.driver.CUtensorMapL2promotion(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Tensor map L2 promotion type

CU_TENSOR_MAP_L2_PROMOTION_NONE = 0
CU_TENSOR_MAP_L2_PROMOTION_L2_64B = 1
CU_TENSOR_MAP_L2_PROMOTION_L2_128B = 2
CU_TENSOR_MAP_L2_PROMOTION_L2_256B = 3
class cuda.bindings.driver.CUtensorMapFloatOOBfill(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Tensor map out-of-bounds fill type

CU_TENSOR_MAP_FLOAT_OOB_FILL_NONE = 0
CU_TENSOR_MAP_FLOAT_OOB_FILL_NAN_REQUEST_ZERO_FMA = 1
class cuda.bindings.driver.CUDA_POINTER_ATTRIBUTE_ACCESS_FLAGS(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Access flags that specify the level of access the current context’s device has on the memory referenced.

CU_POINTER_ATTRIBUTE_ACCESS_FLAG_NONE = 0

No access, meaning the device cannot access this memory at all; it must therefore be staged through accessible memory in order to complete certain operations

CU_POINTER_ATTRIBUTE_ACCESS_FLAG_READ = 1

Read-only access, meaning writes to this memory are considered invalid accesses and thus return an error.

CU_POINTER_ATTRIBUTE_ACCESS_FLAG_READWRITE = 3

Read-write access, the device has full read-write access to the memory
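
These values are what CU_POINTER_ATTRIBUTE_ACCESS_FLAGS reports through cuPointerGetAttribute. A sketch, assuming dptr is a valid CUdeviceptr from an earlier allocation:

    from cuda.bindings import driver

    err, flags = driver.cuPointerGetAttribute(
        driver.CUpointer_attribute.CU_POINTER_ATTRIBUTE_ACCESS_FLAGS, dptr)
    if err == driver.CUresult.CUDA_SUCCESS:
        # flags holds one of the CU_POINTER_ATTRIBUTE_ACCESS_FLAG_* values above.
        print(driver.CUDA_POINTER_ATTRIBUTE_ACCESS_FLAGS(flags))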

class cuda.bindings.driver.CUexternalMemoryHandleType(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

External memory handle types

CU_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_FD = 1

Handle is an opaque file descriptor

CU_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_WIN32 = 2

Handle is an opaque shared NT handle

CU_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_WIN32_KMT = 3

Handle is an opaque, globally shared handle

CU_EXTERNAL_MEMORY_HANDLE_TYPE_D3D12_HEAP = 4

Handle is a D3D12 heap object

CU_EXTERNAL_MEMORY_HANDLE_TYPE_D3D12_RESOURCE = 5

Handle is a D3D12 committed resource

CU_EXTERNAL_MEMORY_HANDLE_TYPE_D3D11_RESOURCE = 6

Handle is a shared NT handle to a D3D11 resource

CU_EXTERNAL_MEMORY_HANDLE_TYPE_D3D11_RESOURCE_KMT = 7

Handle is a globally shared handle to a D3D11 resource

CU_EXTERNAL_MEMORY_HANDLE_TYPE_NVSCIBUF = 8

Handle is an NvSciBuf object

class cuda.bindings.driver.CUexternalSemaphoreHandleType(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

External semaphore handle types

CU_EXTERNAL_SEMAPHORE_HANDLE_TYPE_OPAQUE_FD = 1

Handle is an opaque file descriptor

CU_EXTERNAL_SEMAPHORE_HANDLE_TYPE_OPAQUE_WIN32 = 2

Handle is an opaque shared NT handle

CU_EXTERNAL_SEMAPHORE_HANDLE_TYPE_OPAQUE_WIN32_KMT = 3

Handle is an opaque, globally shared handle

CU_EXTERNAL_SEMAPHORE_HANDLE_TYPE_D3D12_FENCE = 4

Handle is a shared NT handle referencing a D3D12 fence object

CU_EXTERNAL_SEMAPHORE_HANDLE_TYPE_D3D11_FENCE = 5

Handle is a shared NT handle referencing a D3D11 fence object

CU_EXTERNAL_SEMAPHORE_HANDLE_TYPE_NVSCISYNC = 6

Opaque handle to NvSciSync Object

CU_EXTERNAL_SEMAPHORE_HANDLE_TYPE_D3D11_KEYED_MUTEX = 7

Handle is a shared NT handle referencing a D3D11 keyed mutex object

CU_EXTERNAL_SEMAPHORE_HANDLE_TYPE_D3D11_KEYED_MUTEX_KMT = 8

Handle is a globally shared handle referencing a D3D11 keyed mutex object

CU_EXTERNAL_SEMAPHORE_HANDLE_TYPE_TIMELINE_SEMAPHORE_FD = 9

Handle is an opaque file descriptor referencing a timeline semaphore

CU_EXTERNAL_SEMAPHORE_HANDLE_TYPE_TIMELINE_SEMAPHORE_WIN32 = 10

Handle is an opaque shared NT handle referencing a timeline semaphore

class cuda.bindings.driver.CUmemAllocationHandleType(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Flags for specifying particular handle types

CU_MEM_HANDLE_TYPE_NONE = 0

Does not allow any export mechanism.

CU_MEM_HANDLE_TYPE_POSIX_FILE_DESCRIPTOR = 1

Allows a file descriptor to be used for exporting. Permitted only on POSIX systems. (int)

CU_MEM_HANDLE_TYPE_WIN32 = 2

Allows a Win32 NT handle to be used for exporting. (HANDLE)

CU_MEM_HANDLE_TYPE_WIN32_KMT = 4

Allows a Win32 KMT handle to be used for exporting. (D3DKMT_HANDLE)

CU_MEM_HANDLE_TYPE_FABRIC = 8

Allows a fabric handle to be used for exporting. (CUmemFabricHandle)

CU_MEM_HANDLE_TYPE_MAX = 2147483647
class cuda.bindings.driver.CUmemAccess_flags(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Specifies the memory protection flags for mapping.

CU_MEM_ACCESS_FLAGS_PROT_NONE = 0

Default, make the address range not accessible

CU_MEM_ACCESS_FLAGS_PROT_READ = 1

Make the address range read accessible

CU_MEM_ACCESS_FLAGS_PROT_READWRITE = 3

Make the address range read-write accessible

CU_MEM_ACCESS_FLAGS_PROT_MAX = 2147483647
class cuda.bindings.driver.CUmemLocationType(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Specifies the type of location

CU_MEM_LOCATION_TYPE_INVALID = 0
CU_MEM_LOCATION_TYPE_DEVICE = 1

Location is a device location, thus id is a device ordinal

CU_MEM_LOCATION_TYPE_HOST = 2

Location is host, id is ignored

CU_MEM_LOCATION_TYPE_HOST_NUMA = 3

Location is a host NUMA node, thus id is a host NUMA node id

CU_MEM_LOCATION_TYPE_HOST_NUMA_CURRENT = 4

Location is a host NUMA node of the current thread, id is ignored

CU_MEM_LOCATION_TYPE_MAX = 2147483647
class cuda.bindings.driver.CUmemAllocationType(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Defines the allocation types available

CU_MEM_ALLOCATION_TYPE_INVALID = 0
CU_MEM_ALLOCATION_TYPE_PINNED = 1

This allocation type is ‘pinned’, i.e. cannot migrate from its current location while the application is actively using it

CU_MEM_ALLOCATION_TYPE_MAX = 2147483647
class cuda.bindings.driver.CUmemAllocationGranularity_flags(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Flag for requesting different optimal and required granularities for an allocation.

CU_MEM_ALLOC_GRANULARITY_MINIMUM = 0

Minimum required granularity for allocation

CU_MEM_ALLOC_GRANULARITY_RECOMMENDED = 1

Recommended granularity for allocation for best performance
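
Taken together, the allocation enums above drive the virtual memory management APIs: cuMemGetAllocationGranularity sizes the request, cuMemCreate produces physical memory of a given CUmemAllocationType, and cuMemSetAccess applies CUmemAccess_flags to a mapped range. A condensed sketch for device 0, with error checking omitted (not a complete program):

    from cuda.bindings import driver

    driver.cuInit(0)
    err, dev = driver.cuDeviceGet(0)
    err, ctx = driver.cuDevicePrimaryCtxRetain(dev)
    (err,) = driver.cuCtxSetCurrent(ctx)

    # Describe a pinned device allocation on device 0.
    prop = driver.CUmemAllocationProp()
    prop.type = driver.CUmemAllocationType.CU_MEM_ALLOCATION_TYPE_PINNED
    prop.location.type = driver.CUmemLocationType.CU_MEM_LOCATION_TYPE_DEVICE
    prop.location.id = 0

    # Round the requested size (1 MiB here) up to the minimum granularity.
    err, gran = driver.cuMemGetAllocationGranularity(
        prop, driver.CUmemAllocationGranularity_flags.CU_MEM_ALLOC_GRANULARITY_MINIMUM)
    size = ((1 << 20) + gran - 1) // gran * gran

    # Create physical memory, reserve a virtual address range, and map them together.
    err, handle = driver.cuMemCreate(size, prop, 0)
    err, ptr = driver.cuMemAddressReserve(size, 0, 0, 0)
    (err,) = driver.cuMemMap(ptr, size, 0, handle, 0)

    # Grant the device read-write access to the mapped range.
    access = driver.CUmemAccessDesc()
    access.location.type = driver.CUmemLocationType.CU_MEM_LOCATION_TYPE_DEVICE
    access.location.id = 0
    access.flags = driver.CUmemAccess_flags.CU_MEM_ACCESS_FLAGS_PROT_READWRITE
    (err,) = driver.cuMemSetAccess(ptr, size, [access], 1)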

class cuda.bindings.driver.CUmemRangeHandleType(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Specifies the handle type for address range

CU_MEM_RANGE_HANDLE_TYPE_DMA_BUF_FD = 1
CU_MEM_RANGE_HANDLE_TYPE_MAX = 2147483647
class cuda.bindings.driver.CUarraySparseSubresourceType(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Sparse subresource types

CU_ARRAY_SPARSE_SUBRESOURCE_TYPE_SPARSE_LEVEL = 0
CU_ARRAY_SPARSE_SUBRESOURCE_TYPE_MIPTAIL = 1
class cuda.bindings.driver.CUmemOperationType(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Memory operation types

CU_MEM_OPERATION_TYPE_MAP = 1
CU_MEM_OPERATION_TYPE_UNMAP = 2
class cuda.bindings.driver.CUmemHandleType(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Memory handle types

CU_MEM_HANDLE_TYPE_GENERIC = 0
class cuda.bindings.driver.CUmemAllocationCompType(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Specifies compression attribute for an allocation.

CU_MEM_ALLOCATION_COMP_NONE = 0

Allocating non-compressible memory

CU_MEM_ALLOCATION_COMP_GENERIC = 1

Allocating compressible memory

class cuda.bindings.driver.CUmulticastGranularity_flags(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Flags for querying different granularities for a multicast object

CU_MULTICAST_GRANULARITY_MINIMUM = 0

Minimum required granularity

CU_MULTICAST_GRANULARITY_RECOMMENDED = 1

Recommended granularity for best performance

class cuda.bindings.driver.CUgraphExecUpdateResult(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

CUDA Graph Update error types

CU_GRAPH_EXEC_UPDATE_SUCCESS = 0

The update succeeded

CU_GRAPH_EXEC_UPDATE_ERROR = 1

The update failed for an unexpected reason which is described in the return value of the function

CU_GRAPH_EXEC_UPDATE_ERROR_TOPOLOGY_CHANGED = 2

The update failed because the topology changed

CU_GRAPH_EXEC_UPDATE_ERROR_NODE_TYPE_CHANGED = 3

The update failed because a node type changed

CU_GRAPH_EXEC_UPDATE_ERROR_FUNCTION_CHANGED = 4

The update failed because the function of a kernel node changed (CUDA driver < 11.2)

CU_GRAPH_EXEC_UPDATE_ERROR_PARAMETERS_CHANGED = 5

The update failed because the parameters changed in a way that is not supported

CU_GRAPH_EXEC_UPDATE_ERROR_NOT_SUPPORTED = 6

The update failed because something about the node is not supported

CU_GRAPH_EXEC_UPDATE_ERROR_UNSUPPORTED_FUNCTION_CHANGE = 7

The update failed because the function of a kernel node changed in an unsupported way

CU_GRAPH_EXEC_UPDATE_ERROR_ATTRIBUTES_CHANGED = 8

The update failed because the node attributes changed in a way that is not supported
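
These result codes are reported when updating an already-instantiated executable graph in place. A sketch using the CUDA 12 signature of cuGraphExecUpdate (graph_exec and updated_graph are assumed to exist from earlier capture and instantiation steps):

    from cuda.bindings import driver

    err, info = driver.cuGraphExecUpdate(graph_exec, updated_graph)
    if err == driver.CUresult.CUDA_SUCCESS:
        print("executable graph updated in place")
    elif info.result == driver.CUgraphExecUpdateResult.CU_GRAPH_EXEC_UPDATE_ERROR_TOPOLOGY_CHANGED:
        # Topology changed, so fall back to re-instantiating the updated graph.
        err, graph_exec = driver.cuGraphInstantiateWithFlags(updated_graph, 0)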

class cuda.bindings.driver.CUmemPool_attribute(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

CUDA memory pool attributes

CU_MEMPOOL_ATTR_REUSE_FOLLOW_EVENT_DEPENDENCIES = 1

(value type = int) Allow cuMemAllocAsync to use memory asynchronously freed in other streams, as long as a stream-ordering dependency of the allocating stream on the free action exists. CUDA events and null-stream interactions can create the required stream-ordered dependencies. (default enabled)

CU_MEMPOOL_ATTR_REUSE_ALLOW_OPPORTUNISTIC = 2

(value type = int) Allow reuse of already completed frees when there is no dependency between the free and allocation. (default enabled)

CU_MEMPOOL_ATTR_REUSE_ALLOW_INTERNAL_DEPENDENCIES = 3

(value type = int) Allow cuMemAllocAsync to insert new stream dependencies in order to establish the stream ordering required to reuse a piece of memory released by cuMemFreeAsync (default enabled).

CU_MEMPOOL_ATTR_RELEASE_THRESHOLD = 4

(value type = cuuint64_t) Amount of reserved memory in bytes to hold onto before trying to release memory back to the OS. When more than the release threshold bytes of memory are held by the memory pool, the allocator will try to release memory back to the OS on the next call to stream, event or context synchronize. (default 0)

CU_MEMPOOL_ATTR_RESERVED_MEM_CURRENT = 5

(value type = cuuint64_t) Amount of backing memory currently allocated for the mempool.

CU_MEMPOOL_ATTR_RESERVED_MEM_HIGH = 6

(value type = cuuint64_t) High watermark of backing memory allocated for the mempool since the last time it was reset. High watermark can only be reset to zero.

CU_MEMPOOL_ATTR_USED_MEM_CURRENT = 7

(value type = cuuint64_t) Amount of memory from the pool that is currently in use by the application.

CU_MEMPOOL_ATTR_USED_MEM_HIGH = 8

(value type = cuuint64_t) High watermark of the amount of memory from the pool that was in use by the application since the last time it was reset. High watermark can only be reset to zero.
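
These attributes are read and written with cuMemPoolGetAttribute and cuMemPoolSetAttribute. A sketch that raises the release threshold of device 0's default pool so freed memory is retained rather than returned to the OS (the 1 GiB value is illustrative):

    from cuda.bindings import driver

    err, pool = driver.cuDeviceGetDefaultMemPool(0)

    # Keep up to 1 GiB of freed memory cached in the pool before releasing it to the OS.
    threshold = driver.cuuint64_t(1 << 30)
    (err,) = driver.cuMemPoolSetAttribute(
        pool, driver.CUmemPool_attribute.CU_MEMPOOL_ATTR_RELEASE_THRESHOLD, threshold)

    err, used_high = driver.cuMemPoolGetAttribute(
        pool, driver.CUmemPool_attribute.CU_MEMPOOL_ATTR_USED_MEM_HIGH)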

class cuda.bindings.driver.CUgraphMem_attribute(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)
CU_GRAPH_MEM_ATTR_USED_MEM_CURRENT = 0

(value type = cuuint64_t) Amount of memory, in bytes, currently associated with graphs

CU_GRAPH_MEM_ATTR_USED_MEM_HIGH = 1

(value type = cuuint64_t) High watermark of memory, in bytes, associated with graphs since the last time it was reset. High watermark can only be reset to zero.

CU_GRAPH_MEM_ATTR_RESERVED_MEM_CURRENT = 2

(value type = cuuint64_t) Amount of memory, in bytes, currently allocated for use by the CUDA graphs asynchronous allocator.

CU_GRAPH_MEM_ATTR_RESERVED_MEM_HIGH = 3

(value type = cuuint64_t) High watermark of memory, in bytes, allocated for use by the CUDA graphs asynchronous allocator.
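
These per-device counters can be read with cuDeviceGetGraphMemAttribute, and memory the graph allocator has reserved but is not using can be returned with cuDeviceGraphMemTrim. A brief sketch for device 0 (the returned value may be wrapped as a cuuint64_t):

    from cuda.bindings import driver

    attr = driver.CUgraphMem_attribute.CU_GRAPH_MEM_ATTR_RESERVED_MEM_CURRENT
    err, reserved = driver.cuDeviceGetGraphMemAttribute(0, attr)
    print("graph allocator currently reserves:", reserved)

    # Release memory the graph allocator reserved but is not currently using.
    (err,) = driver.cuDeviceGraphMemTrim(0)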

class cuda.bindings.driver.CUflushGPUDirectRDMAWritesOptions(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Bitmasks for CU_DEVICE_ATTRIBUTE_GPU_DIRECT_RDMA_FLUSH_WRITES_OPTIONS

CU_FLUSH_GPU_DIRECT_RDMA_WRITES_OPTION_HOST = 1

cuFlushGPUDirectRDMAWrites() and its CUDA Runtime API counterpart are supported on the device.

CU_FLUSH_GPU_DIRECT_RDMA_WRITES_OPTION_MEMOPS = 2

The CU_STREAM_WAIT_VALUE_FLUSH flag and the CU_STREAM_MEM_OP_FLUSH_REMOTE_WRITES MemOp are supported on the device.

class cuda.bindings.driver.CUGPUDirectRDMAWritesOrdering(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Platform native ordering for GPUDirect RDMA writes

CU_GPU_DIRECT_RDMA_WRITES_ORDERING_NONE = 0

The device does not natively support ordering of remote writes. cuFlushGPUDirectRDMAWrites() can be leveraged if supported.

CU_GPU_DIRECT_RDMA_WRITES_ORDERING_OWNER = 100

Natively, the device can consistently consume remote writes, although other CUDA devices may not.

CU_GPU_DIRECT_RDMA_WRITES_ORDERING_ALL_DEVICES = 200

Any CUDA device in the system can consistently consume remote writes to this device.

class cuda.bindings.driver.CUflushGPUDirectRDMAWritesScope(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

The scopes for cuFlushGPUDirectRDMAWrites

CU_FLUSH_GPU_DIRECT_RDMA_WRITES_TO_OWNER = 100

Blocks until remote writes are visible to the CUDA device context owning the data.

CU_FLUSH_GPU_DIRECT_RDMA_WRITES_TO_ALL_DEVICES = 200

Blocks until remote writes are visible to all CUDA device contexts.

class cuda.bindings.driver.CUflushGPUDirectRDMAWritesTarget(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

The targets for cuFlushGPUDirectRDMAWrites

CU_FLUSH_GPU_DIRECT_RDMA_WRITES_TARGET_CURRENT_CTX = 0

Sets the target for cuFlushGPUDirectRDMAWrites() to the currently active CUDA device context.
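
The scope and target enums above are the two parameters of cuFlushGPUDirectRDMAWrites. A sketch that flushes outstanding remote writes so they become visible to the context owning the data (only meaningful on devices that report the corresponding flush-writes option):

    from cuda.bindings import driver

    (err,) = driver.cuFlushGPUDirectRDMAWrites(
        driver.CUflushGPUDirectRDMAWritesTarget.CU_FLUSH_GPU_DIRECT_RDMA_WRITES_TARGET_CURRENT_CTX,
        driver.CUflushGPUDirectRDMAWritesScope.CU_FLUSH_GPU_DIRECT_RDMA_WRITES_TO_OWNER)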

class cuda.bindings.driver.CUgraphDebugDot_flags(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

The additional write options for cuGraphDebugDotPrint

CU_GRAPH_DEBUG_DOT_FLAGS_VERBOSE = 1

Output all debug data as if every debug flag is enabled

CU_GRAPH_DEBUG_DOT_FLAGS_RUNTIME_TYPES = 2

Use CUDA Runtime structures for output

CU_GRAPH_DEBUG_DOT_FLAGS_KERNEL_NODE_PARAMS = 4

Adds CUDA_KERNEL_NODE_PARAMS values to output

CU_GRAPH_DEBUG_DOT_FLAGS_MEMCPY_NODE_PARAMS = 8

Adds CUDA_MEMCPY3D values to output

CU_GRAPH_DEBUG_DOT_FLAGS_MEMSET_NODE_PARAMS = 16

Adds CUDA_MEMSET_NODE_PARAMS values to output

CU_GRAPH_DEBUG_DOT_FLAGS_HOST_NODE_PARAMS = 32

Adds CUDA_HOST_NODE_PARAMS values to output

CU_GRAPH_DEBUG_DOT_FLAGS_EVENT_NODE_PARAMS = 64

Adds CUevent handle from record and wait nodes to output

CU_GRAPH_DEBUG_DOT_FLAGS_EXT_SEMAS_SIGNAL_NODE_PARAMS = 128

Adds CUDA_EXT_SEM_SIGNAL_NODE_PARAMS values to output

CU_GRAPH_DEBUG_DOT_FLAGS_EXT_SEMAS_WAIT_NODE_PARAMS = 256

Adds CUDA_EXT_SEM_WAIT_NODE_PARAMS values to output

CU_GRAPH_DEBUG_DOT_FLAGS_KERNEL_NODE_ATTRIBUTES = 512

Adds CUkernelNodeAttrValue values to output

CU_GRAPH_DEBUG_DOT_FLAGS_HANDLES = 1024

Adds node handles and every kernel function handle to output

CU_GRAPH_DEBUG_DOT_FLAGS_MEM_ALLOC_NODE_PARAMS = 2048

Adds memory alloc node parameters to output

CU_GRAPH_DEBUG_DOT_FLAGS_MEM_FREE_NODE_PARAMS = 4096

Adds memory free node parameters to output

CU_GRAPH_DEBUG_DOT_FLAGS_BATCH_MEM_OP_NODE_PARAMS = 8192

Adds batch mem op node parameters to output

CU_GRAPH_DEBUG_DOT_FLAGS_EXTRA_TOPO_INFO = 16384

Adds edge numbering information

CU_GRAPH_DEBUG_DOT_FLAGS_CONDITIONAL_NODE_PARAMS = 32768

Adds conditional node parameters to output
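
These flags form a bitmask passed to cuGraphDebugDotPrint. A sketch that writes a DOT file including kernel node parameters and handles (graph is assumed to be a previously built CUgraph; the output path is illustrative and is passed as bytes):

    from cuda.bindings import driver

    flags = (driver.CUgraphDebugDot_flags.CU_GRAPH_DEBUG_DOT_FLAGS_KERNEL_NODE_PARAMS
             | driver.CUgraphDebugDot_flags.CU_GRAPH_DEBUG_DOT_FLAGS_HANDLES)
    (err,) = driver.cuGraphDebugDotPrint(graph, b"debug_graph.dot", flags)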

class cuda.bindings.driver.CUuserObject_flags(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Flags for user objects for graphs

CU_USER_OBJECT_NO_DESTRUCTOR_SYNC = 1

Indicates the destructor execution is not synchronized by any CUDA handle.

class cuda.bindings.driver.CUuserObjectRetain_flags(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Flags for retaining user object references for graphs

CU_GRAPH_USER_OBJECT_MOVE = 1

Transfer references from the caller rather than creating new references.

class cuda.bindings.driver.CUgraphInstantiate_flags(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Flags for instantiating a graph

CUDA_GRAPH_INSTANTIATE_FLAG_AUTO_FREE_ON_LAUNCH = 1

Automatically free memory allocated in a graph before relaunching.

CUDA_GRAPH_INSTANTIATE_FLAG_UPLOAD = 2

Automatically upload the graph after instantiation. Only supported by cuGraphInstantiateWithParams. The upload will be performed using the stream provided in instantiateParams.

CUDA_GRAPH_INSTANTIATE_FLAG_DEVICE_LAUNCH = 4

Instantiate the graph to be launchable from the device. This flag can only be used on platforms which support unified addressing. This flag cannot be used in conjunction with CUDA_GRAPH_INSTANTIATE_FLAG_AUTO_FREE_ON_LAUNCH.

CUDA_GRAPH_INSTANTIATE_FLAG_USE_NODE_PRIORITY = 8

Run the graph using the per-node priority attributes rather than the priority of the stream it is launched into.
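
These flags are passed to cuGraphInstantiateWithFlags (or carried in the parameters of cuGraphInstantiateWithParams). A sketch that instantiates a previously built graph with auto-free-on-launch behaviour and launches it (graph is assumed to exist):

    from cuda.bindings import driver

    flags = driver.CUgraphInstantiate_flags.CUDA_GRAPH_INSTANTIATE_FLAG_AUTO_FREE_ON_LAUNCH
    err, graph_exec = driver.cuGraphInstantiateWithFlags(graph, flags)
    err, stream = driver.cuStreamCreate(0)
    (err,) = driver.cuGraphLaunch(graph_exec, stream)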

class cuda.bindings.driver.CUdeviceNumaConfig(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

CUDA device NUMA configuration

CU_DEVICE_NUMA_CONFIG_NONE = 0

The GPU is not a NUMA node

CU_DEVICE_NUMA_CONFIG_NUMA_NODE = 1

The GPU is a NUMA node, CU_DEVICE_ATTRIBUTE_NUMA_ID contains its NUMA ID
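
The NUMA configuration is reported through device attributes. A sketch that checks whether device 0 is itself a NUMA node and, if so, reads its NUMA ID (assumes a CUDA 12.2+ driver and bindings that expose these attributes):

    from cuda.bindings import driver

    err, numa_config = driver.cuDeviceGetAttribute(
        driver.CUdevice_attribute.CU_DEVICE_ATTRIBUTE_NUMA_CONFIG, 0)
    if numa_config == driver.CUdeviceNumaConfig.CU_DEVICE_NUMA_CONFIG_NUMA_NODE:
        err, numa_id = driver.cuDeviceGetAttribute(
            driver.CUdevice_attribute.CU_DEVICE_ATTRIBUTE_NUMA_ID, 0)
        print(f"device 0 is NUMA node {numa_id}")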

class cuda.bindings.driver.CUeglFrameType(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

CUDA EglFrame type - array or pointer

CU_EGL_FRAME_TYPE_ARRAY = 0

Frame type CUDA array

CU_EGL_FRAME_TYPE_PITCH = 1

Frame type pointer

class cuda.bindings.driver.CUeglResourceLocationFlags(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Resource location flags - sysmem or vidmem. For a CUDA context on an iGPU, video and system memory are equivalent, so these flags have no effect on execution. For a CUDA context on a dGPU, applications can use CUeglResourceLocationFlags as a hint about the desired location: CU_EGL_RESOURCE_LOCATION_SYSMEM makes the frame data resident in system memory for access by CUDA, while CU_EGL_RESOURCE_LOCATION_VIDMEM makes it resident in dedicated video memory. If the frame is produced in a different memory, there may be additional latency due to new allocation and data migration.

CU_EGL_RESOURCE_LOCATION_SYSMEM = 0

Resource location sysmem

CU_EGL_RESOURCE_LOCATION_VIDMEM = 1

Resource location vidmem

class cuda.bindings.driver.CUeglColorFormat(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

CUDA EGL Color Format - The different planar and multiplanar formats currently supported for CUDA_EGL interops. Three channel formats are currently not supported for CU_EGL_FRAME_TYPE_ARRAY

CU_EGL_COLOR_FORMAT_YUV420_PLANAR = 0

Y, U, V in three surfaces, each in a separate surface, U/V width = 1/2 Y width, U/V height = 1/2 Y height.

CU_EGL_COLOR_FORMAT_YUV420_SEMIPLANAR = 1

Y, UV in two surfaces (UV as one surface) with VU byte ordering, width, height ratio same as YUV420Planar.

CU_EGL_COLOR_FORMAT_YUV422_PLANAR = 2

Y, U, V each in a separate surface, U/V width = 1/2 Y width, U/V height = Y height.

CU_EGL_COLOR_FORMAT_YUV422_SEMIPLANAR = 3

Y, UV in two surfaces with VU byte ordering, width, height ratio same as YUV422Planar.

CU_EGL_COLOR_FORMAT_RGB = 4

R/G/B three channels in one surface with BGR byte ordering. Only pitch linear format supported.

CU_EGL_COLOR_FORMAT_BGR = 5

R/G/B three channels in one surface with RGB byte ordering. Only pitch linear format supported.

CU_EGL_COLOR_FORMAT_ARGB = 6

R/G/B/A four channels in one surface with BGRA byte ordering.

CU_EGL_COLOR_FORMAT_RGBA = 7

R/G/B/A four channels in one surface with ABGR byte ordering.

CU_EGL_COLOR_FORMAT_L = 8

single luminance channel in one surface.

CU_EGL_COLOR_FORMAT_R = 9

single color channel in one surface.

CU_EGL_COLOR_FORMAT_YUV444_PLANAR = 10

Y, U, V in three surfaces, each in a separate surface, U/V width = Y width, U/V height = Y height.

CU_EGL_COLOR_FORMAT_YUV444_SEMIPLANAR = 11

Y, UV in two surfaces (UV as one surface) with VU byte ordering, width, height ratio same as YUV444Planar.

CU_EGL_COLOR_FORMAT_YUYV_422 = 12

Y, U, V in one surface, interleaved as UYVY in one channel.

CU_EGL_COLOR_FORMAT_UYVY_422 = 13

Y, U, V in one surface, interleaved as YUYV in one channel.

CU_EGL_COLOR_FORMAT_ABGR = 14

R/G/B/A four channels in one surface with RGBA byte ordering.

CU_EGL_COLOR_FORMAT_BGRA = 15

R/G/B/A four channels in one surface with ARGB byte ordering.

CU_EGL_COLOR_FORMAT_A = 16

Alpha color format - one channel in one surface.

CU_EGL_COLOR_FORMAT_RG = 17

R/G color format - two channels in one surface with GR byte ordering

CU_EGL_COLOR_FORMAT_AYUV = 18

Y, U, V, A four channels in one surface, interleaved as VUYA.

CU_EGL_COLOR_FORMAT_YVU444_SEMIPLANAR = 19

Y, VU in two surfaces (VU as one surface) with UV byte ordering, U/V width = Y width, U/V height = Y height.

CU_EGL_COLOR_FORMAT_YVU422_SEMIPLANAR = 20

Y, VU in two surfaces (VU as one surface) with UV byte ordering, U/V width = 1/2 Y width, U/V height = Y height.

CU_EGL_COLOR_FORMAT_YVU420_SEMIPLANAR = 21

Y, VU in two surfaces (VU as one surface) with UV byte ordering, U/V width = 1/2 Y width, U/V height = 1/2 Y height.

CU_EGL_COLOR_FORMAT_Y10V10U10_444_SEMIPLANAR = 22

Y10, V10U10 in two surfaces (VU as one surface) with UV byte ordering, U/V width = Y width, U/V height = Y height.

CU_EGL_COLOR_FORMAT_Y10V10U10_420_SEMIPLANAR = 23

Y10, V10U10 in two surfaces (VU as one surface) with UV byte ordering, U/V width = 1/2 Y width, U/V height = 1/2 Y height.

CU_EGL_COLOR_FORMAT_Y12V12U12_444_SEMIPLANAR = 24

Y12, V12U12 in two surfaces (VU as one surface) with UV byte ordering, U/V width = Y width, U/V height = Y height.

CU_EGL_COLOR_FORMAT_Y12V12U12_420_SEMIPLANAR = 25

Y12, V12U12 in two surfaces (VU as one surface) with UV byte ordering, U/V width = 1/2 Y width, U/V height = 1/2 Y height.

CU_EGL_COLOR_FORMAT_VYUY_ER = 26

Extended Range Y, U, V in one surface, interleaved as YVYU in one channel.

CU_EGL_COLOR_FORMAT_UYVY_ER = 27

Extended Range Y, U, V in one surface, interleaved as YUYV in one channel.

CU_EGL_COLOR_FORMAT_YUYV_ER = 28

Extended Range Y, U, V in one surface, interleaved as UYVY in one channel.

CU_EGL_COLOR_FORMAT_YVYU_ER = 29

Extended Range Y, U, V in one surface, interleaved as VYUY in one channel.

CU_EGL_COLOR_FORMAT_YUV_ER = 30

Extended Range Y, U, V three channels in one surface, interleaved as VUY. Only pitch linear format supported.

CU_EGL_COLOR_FORMAT_YUVA_ER = 31

Extended Range Y, U, V, A four channels in one surface, interleaved as AVUY.

CU_EGL_COLOR_FORMAT_AYUV_ER = 32

Extended Range Y, U, V, A four channels in one surface, interleaved as VUYA.

CU_EGL_COLOR_FORMAT_YUV444_PLANAR_ER = 33

Extended Range Y, U, V in three surfaces, U/V width = Y width, U/V height = Y height.

CU_EGL_COLOR_FORMAT_YUV422_PLANAR_ER = 34

Extended Range Y, U, V in three surfaces, U/V width = 1/2 Y width, U/V height = Y height.

CU_EGL_COLOR_FORMAT_YUV420_PLANAR_ER = 35

Extended Range Y, U, V in three surfaces, U/V width = 1/2 Y width, U/V height = 1/2 Y height.

CU_EGL_COLOR_FORMAT_YUV444_SEMIPLANAR_ER = 36

Extended Range Y, UV in two surfaces (UV as one surface) with VU byte ordering, U/V width = Y width, U/V height = Y height.

CU_EGL_COLOR_FORMAT_YUV422_SEMIPLANAR_ER = 37

Extended Range Y, UV in two surfaces (UV as one surface) with VU byte ordering, U/V width = 1/2 Y width, U/V height = Y height.

CU_EGL_COLOR_FORMAT_YUV420_SEMIPLANAR_ER = 38

Extended Range Y, UV in two surfaces (UV as one surface) with VU byte ordering, U/V width = 1/2 Y width, U/V height = 1/2 Y height.

CU_EGL_COLOR_FORMAT_YVU444_PLANAR_ER = 39

Extended Range Y, V, U in three surfaces, U/V width = Y width, U/V height = Y height.

CU_EGL_COLOR_FORMAT_YVU422_PLANAR_ER = 40

Extended Range Y, V, U in three surfaces, U/V width = 1/2 Y width, U/V height = Y height.

CU_EGL_COLOR_FORMAT_YVU420_PLANAR_ER = 41

Extended Range Y, V, U in three surfaces, U/V width = 1/2 Y width, U/V height = 1/2 Y height.

CU_EGL_COLOR_FORMAT_YVU444_SEMIPLANAR_ER = 42

Extended Range Y, VU in two surfaces (VU as one surface) with UV byte ordering, U/V width = Y width, U/V height = Y height.

CU_EGL_COLOR_FORMAT_YVU422_SEMIPLANAR_ER = 43

Extended Range Y, VU in two surfaces (VU as one surface) with UV byte ordering, U/V width = 1/2 Y width, U/V height = Y height.

CU_EGL_COLOR_FORMAT_YVU420_SEMIPLANAR_ER = 44

Extended Range Y, VU in two surfaces (VU as one surface) with UV byte ordering, U/V width = 1/2 Y width, U/V height = 1/2 Y height.

CU_EGL_COLOR_FORMAT_BAYER_RGGB = 45

Bayer format - one channel in one surface with interleaved RGGB ordering.

CU_EGL_COLOR_FORMAT_BAYER_BGGR = 46

Bayer format - one channel in one surface with interleaved BGGR ordering.

CU_EGL_COLOR_FORMAT_BAYER_GRBG = 47

Bayer format - one channel in one surface with interleaved GRBG ordering.

CU_EGL_COLOR_FORMAT_BAYER_GBRG = 48

Bayer format - one channel in one surface with interleaved GBRG ordering.

CU_EGL_COLOR_FORMAT_BAYER10_RGGB = 49

Bayer10 format - one channel in one surface with interleaved RGGB ordering. Out of 16 bits, 10 bits used 6 bits No-op.

CU_EGL_COLOR_FORMAT_BAYER10_BGGR = 50

Bayer10 format - one channel in one surface with interleaved BGGR ordering. Out of 16 bits, 10 bits used 6 bits No-op.

CU_EGL_COLOR_FORMAT_BAYER10_GRBG = 51

Bayer10 format - one channel in one surface with interleaved GRBG ordering. Out of 16 bits, 10 bits used 6 bits No-op.

CU_EGL_COLOR_FORMAT_BAYER10_GBRG = 52

Bayer10 format - one channel in one surface with interleaved GBRG ordering. Out of 16 bits, 10 bits used 6 bits No-op.

CU_EGL_COLOR_FORMAT_BAYER12_RGGB = 53

Bayer12 format - one channel in one surface with interleaved RGGB ordering. Out of 16 bits, 12 bits used 4 bits No-op.

CU_EGL_COLOR_FORMAT_BAYER12_BGGR = 54

Bayer12 format - one channel in one surface with interleaved BGGR ordering. Out of 16 bits, 12 bits used 4 bits No-op.

CU_EGL_COLOR_FORMAT_BAYER12_GRBG = 55

Bayer12 format - one channel in one surface with interleaved GRBG ordering. Out of 16 bits, 12 bits used 4 bits No-op.

CU_EGL_COLOR_FORMAT_BAYER12_GBRG = 56

Bayer12 format - one channel in one surface with interleaved GBRG ordering. Out of 16 bits, 12 bits used 4 bits No-op.

CU_EGL_COLOR_FORMAT_BAYER14_RGGB = 57

Bayer14 format - one channel in one surface with interleaved RGGB ordering. Out of 16 bits, 14 bits used 2 bits No-op.

CU_EGL_COLOR_FORMAT_BAYER14_BGGR = 58

Bayer14 format - one channel in one surface with interleaved BGGR ordering. Out of 16 bits, 14 bits used 2 bits No-op.

CU_EGL_COLOR_FORMAT_BAYER14_GRBG = 59

Bayer14 format - one channel in one surface with interleaved GRBG ordering. Out of 16 bits, 14 bits used 2 bits No-op.

CU_EGL_COLOR_FORMAT_BAYER14_GBRG = 60

Bayer14 format - one channel in one surface with interleaved GBRG ordering. Out of 16 bits, 14 bits used 2 bits No-op.

CU_EGL_COLOR_FORMAT_BAYER20_RGGB = 61

Bayer20 format - one channel in one surface with interleaved RGGB ordering. Out of 32 bits, 20 bits used 12 bits No-op.

CU_EGL_COLOR_FORMAT_BAYER20_BGGR = 62

Bayer20 format - one channel in one surface with interleaved BGGR ordering. Out of 32 bits, 20 bits used 12 bits No-op.

CU_EGL_COLOR_FORMAT_BAYER20_GRBG = 63

Bayer20 format - one channel in one surface with interleaved GRBG ordering. Out of 32 bits, 20 bits used 12 bits No-op.

CU_EGL_COLOR_FORMAT_BAYER20_GBRG = 64

Bayer20 format - one channel in one surface with interleaved GBRG ordering. Out of 32 bits, 20 bits used 12 bits No-op.

CU_EGL_COLOR_FORMAT_YVU444_PLANAR = 65

Y, V, U in three surfaces, each in a separate surface, U/V width = Y width, U/V height = Y height.

CU_EGL_COLOR_FORMAT_YVU422_PLANAR = 66

Y, V, U in three surfaces, each in a separate surface, U/V width = 1/2 Y width, U/V height = Y height.

CU_EGL_COLOR_FORMAT_YVU420_PLANAR = 67

Y, V, U in three surfaces, each in a separate surface, U/V width = 1/2 Y width, U/V height = 1/2 Y height.

CU_EGL_COLOR_FORMAT_BAYER_ISP_RGGB = 68

Nvidia proprietary Bayer ISP format - one channel in one surface with interleaved RGGB ordering and mapped to opaque integer datatype.

CU_EGL_COLOR_FORMAT_BAYER_ISP_BGGR = 69

Nvidia proprietary Bayer ISP format - one channel in one surface with interleaved BGGR ordering and mapped to opaque integer datatype.

CU_EGL_COLOR_FORMAT_BAYER_ISP_GRBG = 70

Nvidia proprietary Bayer ISP format - one channel in one surface with interleaved GRBG ordering and mapped to opaque integer datatype.

CU_EGL_COLOR_FORMAT_BAYER_ISP_GBRG = 71

Nvidia proprietary Bayer ISP format - one channel in one surface with interleaved GBRG ordering and mapped to opaque integer datatype.

CU_EGL_COLOR_FORMAT_BAYER_BCCR = 72

Bayer format - one channel in one surface with interleaved BCCR ordering.

CU_EGL_COLOR_FORMAT_BAYER_RCCB = 73

Bayer format - one channel in one surface with interleaved RCCB ordering.

CU_EGL_COLOR_FORMAT_BAYER_CRBC = 74

Bayer format - one channel in one surface with interleaved CRBC ordering.

CU_EGL_COLOR_FORMAT_BAYER_CBRC = 75

Bayer format - one channel in one surface with interleaved CBRC ordering.

CU_EGL_COLOR_FORMAT_BAYER10_CCCC = 76

Bayer10 format - one channel in one surface with interleaved CCCC ordering. Out of 16 bits, 10 bits used 6 bits No-op.

CU_EGL_COLOR_FORMAT_BAYER12_BCCR = 77

Bayer12 format - one channel in one surface with interleaved BCCR ordering. Out of 16 bits, 12 bits used 4 bits No-op.

CU_EGL_COLOR_FORMAT_BAYER12_RCCB = 78

Bayer12 format - one channel in one surface with interleaved RCCB ordering. Out of 16 bits, 12 bits used 4 bits No-op.

CU_EGL_COLOR_FORMAT_BAYER12_CRBC = 79

Bayer12 format - one channel in one surface with interleaved CRBC ordering. Out of 16 bits, 12 bits used 4 bits No-op.

CU_EGL_COLOR_FORMAT_BAYER12_CBRC = 80

Bayer12 format - one channel in one surface with interleaved CBRC ordering. Out of 16 bits, 12 bits used 4 bits No-op.

CU_EGL_COLOR_FORMAT_BAYER12_CCCC = 81

Bayer12 format - one channel in one surface with interleaved CCCC ordering. Out of 16 bits, 12 bits used 4 bits No-op.

CU_EGL_COLOR_FORMAT_Y = 82

Color format for single Y plane.

CU_EGL_COLOR_FORMAT_YUV420_SEMIPLANAR_2020 = 83

Y, UV in two surfaces (UV as one surface) U/V width = 1/2 Y width, U/V height = 1/2 Y height.

CU_EGL_COLOR_FORMAT_YVU420_SEMIPLANAR_2020 = 84

Y, VU in two surfaces (VU as one surface) U/V width = 1/2 Y width, U/V height = 1/2 Y height.

CU_EGL_COLOR_FORMAT_YUV420_PLANAR_2020 = 85

Y, U, V each in a separate surface, U/V width = 1/2 Y width, U/V height= 1/2 Y height.

CU_EGL_COLOR_FORMAT_YVU420_PLANAR_2020 = 86

Y, V, U each in a separate surface, U/V width = 1/2 Y width, U/V height = 1/2 Y height.

CU_EGL_COLOR_FORMAT_YUV420_SEMIPLANAR_709 = 87

Y, UV in two surfaces (UV as one surface) U/V width = 1/2 Y width, U/V height = 1/2 Y height.

CU_EGL_COLOR_FORMAT_YVU420_SEMIPLANAR_709 = 88

Y, VU in two surfaces (VU as one surface) U/V width = 1/2 Y width, U/V height = 1/2 Y height.

CU_EGL_COLOR_FORMAT_YUV420_PLANAR_709 = 89

Y, U, V each in a separate surface, U/V width = 1/2 Y width, U/V height = 1/2 Y height.

CU_EGL_COLOR_FORMAT_YVU420_PLANAR_709 = 90

Y, V, U each in a separate surface, U/V width = 1/2 Y width, U/V height = 1/2 Y height.

CU_EGL_COLOR_FORMAT_Y10V10U10_420_SEMIPLANAR_709 = 91

Y10, V10U10 in two surfaces (VU as one surface), U/V width = 1/2 Y width, U/V height = 1/2 Y height.

CU_EGL_COLOR_FORMAT_Y10V10U10_420_SEMIPLANAR_2020 = 92

Y10, V10U10 in two surfaces (VU as one surface), U/V width = 1/2 Y width, U/V height = 1/2 Y height.

CU_EGL_COLOR_FORMAT_Y10V10U10_422_SEMIPLANAR_2020 = 93

Y10, V10U10 in two surfaces(VU as one surface) U/V width = 1/2 Y width, U/V height = Y height.

CU_EGL_COLOR_FORMAT_Y10V10U10_422_SEMIPLANAR = 94

Y10, V10U10 in two surfaces(VU as one surface) U/V width = 1/2 Y width, U/V height = Y height.

CU_EGL_COLOR_FORMAT_Y10V10U10_422_SEMIPLANAR_709 = 95

Y10, V10U10 in two surfaces(VU as one surface) U/V width = 1/2 Y width, U/V height = Y height.

CU_EGL_COLOR_FORMAT_Y_ER = 96

Extended Range Color format for single Y plane.

CU_EGL_COLOR_FORMAT_Y_709_ER = 97

Extended Range Color format for single Y plane.

CU_EGL_COLOR_FORMAT_Y10_ER = 98

Extended Range Color format for single Y10 plane.

CU_EGL_COLOR_FORMAT_Y10_709_ER = 99

Extended Range Color format for single Y10 plane.

CU_EGL_COLOR_FORMAT_Y12_ER = 100

Extended Range Color format for single Y12 plane.

CU_EGL_COLOR_FORMAT_Y12_709_ER = 101

Extended Range Color format for single Y12 plane.

CU_EGL_COLOR_FORMAT_YUVA = 102

Y, U, V, A four channels in one surface, interleaved as AVUY.

CU_EGL_COLOR_FORMAT_YUV = 103

Y, U, V three channels in one surface, interleaved as VUY. Only pitch linear format supported.

CU_EGL_COLOR_FORMAT_YVYU = 104

Y, U, V in one surface, interleaved as YVYU in one channel.

CU_EGL_COLOR_FORMAT_VYUY = 105

Y, U, V in one surface, interleaved as VYUY in one channel.

CU_EGL_COLOR_FORMAT_Y10V10U10_420_SEMIPLANAR_ER = 106

Extended Range Y10, V10U10 in two surfaces(VU as one surface) U/V width = 1/2 Y width, U/V height = 1/2 Y height.

CU_EGL_COLOR_FORMAT_Y10V10U10_420_SEMIPLANAR_709_ER = 107

Extended Range Y10, V10U10 in two surfaces(VU as one surface) U/V width = 1/2 Y width, U/V height = 1/2 Y height.

CU_EGL_COLOR_FORMAT_Y10V10U10_444_SEMIPLANAR_ER = 108

Extended Range Y10, V10U10 in two surfaces (VU as one surface) U/V width = Y width, U/V height = Y height.

CU_EGL_COLOR_FORMAT_Y10V10U10_444_SEMIPLANAR_709_ER = 109

Extended Range Y10, V10U10 in two surfaces (VU as one surface) U/V width = Y width, U/V height = Y height.

CU_EGL_COLOR_FORMAT_Y12V12U12_420_SEMIPLANAR_ER = 110

Extended Range Y12, V12U12 in two surfaces (VU as one surface) U/V width = 1/2 Y width, U/V height = 1/2 Y height.

CU_EGL_COLOR_FORMAT_Y12V12U12_420_SEMIPLANAR_709_ER = 111

Extended Range Y12, V12U12 in two surfaces (VU as one surface) U/V width = 1/2 Y width, U/V height = 1/2 Y height.

CU_EGL_COLOR_FORMAT_Y12V12U12_444_SEMIPLANAR_ER = 112

Extended Range Y12, V12U12 in two surfaces (VU as one surface) U/V width = Y width, U/V height = Y height.

CU_EGL_COLOR_FORMAT_Y12V12U12_444_SEMIPLANAR_709_ER = 113

Extended Range Y12, V12U12 in two surfaces (VU as one surface) U/V width = Y width, U/V height = Y height.

CU_EGL_COLOR_FORMAT_MAX = 114
class cuda.bindings.driver.CUdeviceptr_v2

CUDA device pointer CUdeviceptr is defined as an unsigned integer type whose size matches the size of a pointer on the target platform.

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUdeviceptr

CUDA device pointer CUdeviceptr is defined as an unsigned integer type whose size matches the size of a pointer on the target platform.

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUdevice_v1

CUDA device

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUdevice

CUDA device

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUcontext(*args, **kwargs)

A regular context handle

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUmodule(*args, **kwargs)

CUDA module

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUfunction(*args, **kwargs)

CUDA function

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUlibrary(*args, **kwargs)

CUDA library

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUkernel(*args, **kwargs)

CUDA kernel

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUarray(*args, **kwargs)

CUDA array

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUmipmappedArray(*args, **kwargs)

CUDA mipmapped array

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUtexref(*args, **kwargs)

CUDA texture reference

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUsurfref(*args, **kwargs)

CUDA surface reference

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUevent(*args, **kwargs)

CUDA event

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUstream(*args, **kwargs)

CUDA stream

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUgraphicsResource(*args, **kwargs)

CUDA graphics interop resource

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUtexObject_v1

An opaque value that represents a CUDA texture object

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUtexObject

An opaque value that represents a CUDA texture object

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUsurfObject_v1

An opaque value that represents a CUDA surface object

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUsurfObject

An opaque value that represents a CUDA surface object

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUexternalMemory(*args, **kwargs)

CUDA external memory

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUexternalSemaphore(*args, **kwargs)

CUDA external semaphore

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUgraph(*args, **kwargs)

CUDA graph

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUgraphNode(*args, **kwargs)

CUDA graph node

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUgraphExec(*args, **kwargs)

CUDA executable graph

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUmemoryPool(*args, **kwargs)

CUDA memory pool

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUuserObject(*args, **kwargs)

CUDA user object for graphs

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUgraphConditionalHandle
getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUgraphDeviceNode(*args, **kwargs)

CUDA graph device node handle

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUasyncCallbackHandle(*args, **kwargs)

CUDA async notification callback handle

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUgreenCtx(*args, **kwargs)

A green context handle. This handle can be used safely from only one CPU thread at a time. Created via cuGreenCtxCreate

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUuuid
bytes

CUDA definition of UUID

Type:

bytes

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUmemFabricHandle_v1

Fabric handle - An opaque handle representing a memory allocation that can be exported to processes in same or different nodes. For IPC between processes on different nodes they must be connected via the NVSwitch fabric.

data
Type:

bytes

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUmemFabricHandle

Fabric handle - An opaque handle representing a memory allocation that can be exported to processes in same or different nodes. For IPC between processes on different nodes they must be connected via the NVSwitch fabric.

data
Type:

bytes

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUipcEventHandle_v1

CUDA IPC event handle

reserved
Type:

bytes

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUipcEventHandle

CUDA IPC event handle

reserved
Type:

bytes

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUipcMemHandle_v1

CUDA IPC mem handle

reserved
Type:

bytes

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUipcMemHandle

CUDA IPC mem handle

reserved
Type:

bytes

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUstreamBatchMemOpParams_v1

Per-operation parameters for cuStreamBatchMemOp

operation
Type:

CUstreamBatchMemOpType

waitValue
Type:

CUstreamMemOpWaitValueParams_st

writeValue
Type:

CUstreamMemOpWriteValueParams_st

flushRemoteWrites
Type:

CUstreamMemOpFlushRemoteWritesParams_st

memoryBarrier
Type:

CUstreamMemOpMemoryBarrierParams_st

pad
Type:

List[cuuint64_t]

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUstreamBatchMemOpParams

Per-operation parameters for cuStreamBatchMemOp

operation
Type:

CUstreamBatchMemOpType

waitValue
Type:

CUstreamMemOpWaitValueParams_st

writeValue
Type:

CUstreamMemOpWriteValueParams_st

flushRemoteWrites
Type:

CUstreamMemOpFlushRemoteWritesParams_st

memoryBarrier
Type:

CUstreamMemOpMemoryBarrierParams_st

pad
Type:

List[cuuint64_t]

getPtr()

Get memory address of class instance
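
Example - a minimal sketch of filling one CUstreamBatchMemOpParams entry and submitting it with cuStreamBatchMemOp. It assumes cuInit has been called, a context is current, stream is a CUstream from cuStreamCreate, dptr is a CUdeviceptr from cuMemAlloc, and the device supports stream memory operations; passing the operations as a Python list is also an assumption about the binding:

    from cuda.bindings import driver

    op = driver.CUstreamBatchMemOpParams()
    op.operation = driver.CUstreamBatchMemOpType.CU_STREAM_MEM_OP_WRITE_VALUE_32
    op.writeValue.address = dptr   # device address to write to (allocated earlier)
    op.writeValue.value = 42       # 32-bit value written when the op executes in stream order
    op.writeValue.flags = 0

    err, = driver.cuStreamBatchMemOp(stream, 1, [op], 0)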

class cuda.bindings.driver.CUDA_BATCH_MEM_OP_NODE_PARAMS_v1
ctx
Type:

CUcontext

count
Type:

unsigned int

paramArray
Type:

CUstreamBatchMemOpParams

flags
Type:

unsigned int

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_BATCH_MEM_OP_NODE_PARAMS
ctx
Type:

CUcontext

count
Type:

unsigned int

paramArray
Type:

CUstreamBatchMemOpParams

flags
Type:

unsigned int

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_BATCH_MEM_OP_NODE_PARAMS_v2

Batch memory operation node parameters

ctx

Context to use for the operations.

Type:

CUcontext

count

Number of operations in paramArray.

Type:

unsigned int

paramArray

Array of batch memory operations.

Type:

CUstreamBatchMemOpParams

flags

Flags to control the node.

Type:

unsigned int

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUasyncNotificationInfo

Information passed to the user via the async notification callback

type
Type:

CUasyncNotificationType

info
Type:

anon_union2

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUasyncCallback(*args, **kwargs)
getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUdevprop_v1

Legacy device properties

maxThreadsPerBlock

Maximum number of threads per block

Type:

int

maxThreadsDim

Maximum size of each dimension of a block

Type:

List[int]

maxGridSize

Maximum size of each dimension of a grid

Type:

List[int]

sharedMemPerBlock

Shared memory available per block in bytes

Type:

int

totalConstantMemory

Constant memory available on device in bytes

Type:

int

SIMDWidth

Warp size in threads

Type:

int

memPitch

Maximum pitch in bytes allowed by memory copies

Type:

int

regsPerBlock

32-bit registers available per block

Type:

int

clockRate

Clock frequency in kilohertz

Type:

int

textureAlign

Alignment requirement for textures

Type:

int

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUdevprop

Legacy device properties

maxThreadsPerBlock

Maximum number of threads per block

Type:

int

maxThreadsDim

Maximum size of each dimension of a block

Type:

List[int]

maxGridSize

Maximum size of each dimension of a grid

Type:

List[int]

sharedMemPerBlock

Shared memory available per block in bytes

Type:

int

totalConstantMemory

Constant memory available on device in bytes

Type:

int

SIMDWidth

Warp size in threads

Type:

int

memPitch

Maximum pitch in bytes allowed by memory copies

Type:

int

regsPerBlock

32-bit registers available per block

Type:

int

clockRate

Clock frequency in kilohertz

Type:

int

textureAlign

Alignment requirement for textures

Type:

int

getPtr()

Get memory address of class instance
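
Example - a minimal sketch that reads a CUdevprop via the deprecated cuDeviceGetProperties entry point, assuming it is exposed by the bindings; newer code would query individual CU_DEVICE_ATTRIBUTE_* values with cuDeviceGetAttribute instead:

    from cuda.bindings import driver

    err, = driver.cuInit(0)
    err, dev = driver.cuDeviceGet(0)
    err, prop = driver.cuDeviceGetProperties(dev)   # legacy CUdevprop snapshot
    print(prop.maxThreadsPerBlock, prop.SIMDWidth, prop.sharedMemPerBlock)
    print(prop.maxThreadsDim, prop.maxGridSize)     # each a list of three ints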

class cuda.bindings.driver.CUlinkState(*args, **kwargs)
getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUhostFn(*args, **kwargs)
getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUaccessPolicyWindow_v1

Specifies an access policy for a window, a contiguous extent of memory beginning at base_ptr and ending at base_ptr + num_bytes. num_bytes is limited by CU_DEVICE_ATTRIBUTE_MAX_ACCESS_POLICY_WINDOW_SIZE. Partition into many segments and assign segments such that: sum of “hit segments” / window == approx. ratio. sum of “miss segments” / window == approx 1-ratio. Segments and ratio specifications are fitted to the capabilities of the architecture. Accesses in a hit segment apply the hitProp access policy. Accesses in a miss segment apply the missProp access policy.

base_ptr

Starting address of the access policy window. CUDA driver may align it.

Type:

Any

num_bytes

Size in bytes of the window policy. CUDA driver may restrict the maximum size and alignment.

Type:

size_t

hitRatio

hitRatio specifies the fraction of lines assigned hitProp; the rest are assigned missProp.

Type:

float

hitProp

CUaccessProperty set for hit.

Type:

CUaccessProperty

missProp

CUaccessProperty set for miss. Must be either NORMAL or STREAMING

Type:

CUaccessProperty

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUaccessPolicyWindow

Specifies an access policy for a window, a contiguous extent of memory beginning at base_ptr and ending at base_ptr + num_bytes. num_bytes is limited by CU_DEVICE_ATTRIBUTE_MAX_ACCESS_POLICY_WINDOW_SIZE. Partition into many segments and assign segments such that: sum of “hit segments” / window == approx. ratio. sum of “miss segments” / window == approx 1-ratio. Segments and ratio specifications are fitted to the capabilities of the architecture. Accesses in a hit segment apply the hitProp access policy. Accesses in a miss segment apply the missProp access policy.

base_ptr

Starting address of the access policy window. CUDA driver may align it.

Type:

Any

num_bytes

Size in bytes of the window policy. CUDA driver may restrict the maximum size and alignment.

Type:

size_t

hitRatio

hitRatio specifies the fraction of lines assigned hitProp; the rest are assigned missProp.

Type:

float

hitProp

CUaccessProperty set for hit.

Type:

CUaccessProperty

missProp

CUaccessProperty set for miss. Must be either NORMAL or STREAMING

Type:

CUaccessProperty

getPtr()

Get memory address of class instance
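
Example - a minimal sketch that attaches an access policy window to a stream through CUstreamAttrValue and cuStreamSetAttribute (both documented below), assuming a current context, a stream from cuStreamCreate, and a device allocation dptr of nbytes bytes; whether the carveout actually persists additionally depends on the CU_LIMIT_PERSISTING_L2_CACHE_SIZE limit:

    from cuda.bindings import driver

    value = driver.CUstreamAttrValue()
    value.accessPolicyWindow.base_ptr = int(dptr)   # window start (driver may align it)
    value.accessPolicyWindow.num_bytes = nbytes     # limited by CU_DEVICE_ATTRIBUTE_MAX_ACCESS_POLICY_WINDOW_SIZE
    value.accessPolicyWindow.hitRatio = 0.6         # ~60% of lines get hitProp, the rest missProp
    value.accessPolicyWindow.hitProp = driver.CUaccessProperty.CU_ACCESS_PROPERTY_PERSISTING
    value.accessPolicyWindow.missProp = driver.CUaccessProperty.CU_ACCESS_PROPERTY_STREAMING

    err, = driver.cuStreamSetAttribute(
        stream,
        driver.CUstreamAttrID.CU_STREAM_ATTRIBUTE_ACCESS_POLICY_WINDOW,
        value)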

class cuda.bindings.driver.CUDA_KERNEL_NODE_PARAMS_v1

GPU kernel node parameters

func

Kernel to launch

Type:

CUfunction

gridDimX

Width of grid in blocks

Type:

unsigned int

gridDimY

Height of grid in blocks

Type:

unsigned int

gridDimZ

Depth of grid in blocks

Type:

unsigned int

blockDimX

X dimension of each thread block

Type:

unsigned int

blockDimY

Y dimension of each thread block

Type:

unsigned int

blockDimZ

Z dimension of each thread block

Type:

unsigned int

sharedMemBytes

Dynamic shared-memory size per thread block in bytes

Type:

unsigned int

kernelParams

Array of pointers to kernel parameters

Type:

Any

extra

Extra options

Type:

Any

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_KERNEL_NODE_PARAMS_v2

GPU kernel node parameters

func

Kernel to launch

Type:

CUfunction

gridDimX

Width of grid in blocks

Type:

unsigned int

gridDimY

Height of grid in blocks

Type:

unsigned int

gridDimZ

Depth of grid in blocks

Type:

unsigned int

blockDimX

X dimension of each thread block

Type:

unsigned int

blockDimY

Y dimension of each thread block

Type:

unsigned int

blockDimZ

Z dimension of each thread block

Type:

unsigned int

sharedMemBytes

Dynamic shared-memory size per thread block in bytes

Type:

unsigned int

kernelParams

Array of pointers to kernel parameters

Type:

Any

extra

Extra options

Type:

Any

kern

Kernel to launch, will only be referenced if func is NULL

Type:

CUkernel

ctx

Context for the kernel task to run in. A NULL value indicates that the current context should be used by the API. This field is ignored if func is set.

Type:

CUcontext

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_KERNEL_NODE_PARAMS

GPU kernel node parameters

func

Kernel to launch

Type:

CUfunction

gridDimX

Width of grid in blocks

Type:

unsigned int

gridDimY

Height of grid in blocks

Type:

unsigned int

gridDimZ

Depth of grid in blocks

Type:

unsigned int

blockDimX

X dimension of each thread block

Type:

unsigned int

blockDimY

Y dimension of each thread block

Type:

unsigned int

blockDimZ

Z dimension of each thread block

Type:

unsigned int

sharedMemBytes

Dynamic shared-memory size per thread block in bytes

Type:

unsigned int

kernelParams

Array of pointers to kernel parameters

Type:

Any

extra

Extra options

Type:

Any

kern

Kernel to launch, will only be referenced if func is NULL

Type:

CUkernel

ctx

Context for the kernel task to run in. A NULL value indicates that the current context should be used by the API. This field is ignored if func is set.

Type:

CUcontext

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_KERNEL_NODE_PARAMS_v3

GPU kernel node parameters

func

Kernel to launch

Type:

CUfunction

gridDimX

Width of grid in blocks

Type:

unsigned int

gridDimY

Height of grid in blocks

Type:

unsigned int

gridDimZ

Depth of grid in blocks

Type:

unsigned int

blockDimX

X dimension of each thread block

Type:

unsigned int

blockDimY

Y dimension of each thread block

Type:

unsigned int

blockDimZ

Z dimension of each thread block

Type:

unsigned int

sharedMemBytes

Dynamic shared-memory size per thread block in bytes

Type:

unsigned int

kernelParams

Array of pointers to kernel parameters

Type:

Any

extra

Extra options

Type:

Any

kern

Kernel to launch, will only be referenced if func is NULL

Type:

CUkernel

ctx

Context for the kernel task to run in. A NULL value indicates that the current context should be used by the API. This field is ignored if func is set.

Type:

CUcontext

getPtr()

Get memory address of class instance
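
Example - a minimal sketch that fills CUDA_KERNEL_NODE_PARAMS and adds a kernel node to a graph with cuGraphAddKernelNode, assuming kernel is a CUfunction (e.g. from cuModuleGetFunction) that takes no arguments and a context is current; passing 0 for kernelParams/extra to mean NULL is an assumption about the binding:

    from cuda.bindings import driver

    params = driver.CUDA_KERNEL_NODE_PARAMS()
    params.func = kernel               # CUfunction; leave kern/ctx at their defaults
    params.gridDimX, params.gridDimY, params.gridDimZ = 64, 1, 1
    params.blockDimX, params.blockDimY, params.blockDimZ = 256, 1, 1
    params.sharedMemBytes = 0
    params.kernelParams = 0            # kernel takes no parameters
    params.extra = 0

    err, graph = driver.cuGraphCreate(0)
    err, node = driver.cuGraphAddKernelNode(graph, None, 0, params)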

class cuda.bindings.driver.CUDA_MEMSET_NODE_PARAMS_v1

Memset node parameters

dst

Destination device pointer

Type:

CUdeviceptr

pitch

Pitch of destination device pointer. Unused if height is 1

Type:

size_t

value

Value to be set

Type:

unsigned int

elementSize

Size of each element in bytes. Must be 1, 2, or 4.

Type:

unsigned int

width

Width of the row in elements

Type:

size_t

height

Number of rows

Type:

size_t

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_MEMSET_NODE_PARAMS

Memset node parameters

dst

Destination device pointer

Type:

CUdeviceptr

pitch

Pitch of destination device pointer. Unused if height is 1

Type:

size_t

value

Value to be set

Type:

unsigned int

elementSize

Size of each element in bytes. Must be 1, 2, or 4.

Type:

unsigned int

width

Width of the row in elements

Type:

size_t

height

Number of rows

Type:

size_t

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_MEMSET_NODE_PARAMS_v2

Memset node parameters

dst

Destination device pointer

Type:

CUdeviceptr

pitch

Pitch of destination device pointer. Unused if height is 1

Type:

size_t

value

Value to be set

Type:

unsigned int

elementSize

Size of each element in bytes. Must be 1, 2, or 4.

Type:

unsigned int

width

Width of the row in elements

Type:

size_t

height

Number of rows

Type:

size_t

ctx

Context on which to run the node

Type:

CUcontext

getPtr()

Get memory address of class instance
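
Example - a minimal sketch that zero-fills a linear allocation of 1024 32-bit elements through a graph memset node, assuming graph, ctx and dptr come from earlier cuGraphCreate / cuCtxCreate / cuMemAlloc calls:

    from cuda.bindings import driver

    memset_params = driver.CUDA_MEMSET_NODE_PARAMS()
    memset_params.dst = dptr
    memset_params.value = 0
    memset_params.elementSize = 4      # must be 1, 2, or 4
    memset_params.width = 1024         # elements per row
    memset_params.height = 1           # single row, so pitch is ignored
    memset_params.pitch = 0

    err, node = driver.cuGraphAddMemsetNode(graph, None, 0, memset_params, ctx)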

class cuda.bindings.driver.CUDA_HOST_NODE_PARAMS_v1

Host node parameters

fn

The function to call when the node executes

Type:

CUhostFn

userData

Argument to pass to the function

Type:

Any

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_HOST_NODE_PARAMS

Host node parameters

fn

The function to call when the node executes

Type:

CUhostFn

userData

Argument to pass to the function

Type:

Any

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_HOST_NODE_PARAMS_v2

Host node parameters

fn

The function to call when the node executes

Type:

CUhostFn

userData

Argument to pass to the function

Type:

Any

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUgraphEdgeData

Optional annotation for edges in a CUDA graph. Note, all edges implicitly have annotations and default to a zero-initialized value if not specified. A zero-initialized struct indicates a standard full serialization of two nodes with memory visibility.

from_port

This indicates when the dependency is triggered from the upstream node on the edge. The meaning is specific to the node type. A value of 0 in all cases means full completion of the upstream node, with memory visibility to the downstream node or portion thereof (indicated by to_port). Only kernel nodes define non-zero ports. A kernel node can use the following output port types: CU_GRAPH_KERNEL_NODE_PORT_DEFAULT, CU_GRAPH_KERNEL_NODE_PORT_PROGRAMMATIC, or CU_GRAPH_KERNEL_NODE_PORT_LAUNCH_ORDER.

Type:

bytes

to_port

This indicates what portion of the downstream node is dependent on the upstream node or portion thereof (indicated by from_port). The meaning is specific to the node type. A value of 0 in all cases means the entirety of the downstream node is dependent on the upstream work. Currently no node types define non-zero ports. Accordingly, this field must be set to zero.

Type:

bytes

type

This should be populated with a value from CUgraphDependencyType. (It is typed as char due to compiler-specific layout of bitfields.) See CUgraphDependencyType.

Type:

bytes

reserved

These bytes are unused and must be zeroed. This ensures compatibility if additional fields are added in the future.

Type:

bytes

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_GRAPH_INSTANTIATE_PARAMS

Graph instantiation parameters

flags

Instantiation flags

Type:

cuuint64_t

hUploadStream

Upload stream

Type:

CUstream

hErrNode_out

The node which caused instantiation to fail, if any

Type:

CUgraphNode

result_out

Whether instantiation was successful. If it failed, the reason why

Type:

CUgraphInstantiateResult

getPtr()

Get memory address of class instance
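
Example - a minimal sketch of instantiating a graph with cuGraphInstantiateWithParams and checking the output fields, assuming graph is a populated CUgraph:

    from cuda.bindings import driver

    inst = driver.CUDA_GRAPH_INSTANTIATE_PARAMS()
    inst.flags = int(driver.CUgraphInstantiate_flags.CUDA_GRAPH_INSTANTIATE_FLAG_AUTO_FREE_ON_LAUNCH)

    err, graph_exec = driver.cuGraphInstantiateWithParams(graph, inst)
    if inst.result_out != driver.CUgraphInstantiateResult.CUDA_GRAPH_INSTANTIATE_SUCCESS:
        print("instantiation failed at node:", inst.hErrNode_out)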

class cuda.bindings.driver.CUlaunchMemSyncDomainMap

Memory synchronization domain map. See ::cudaLaunchMemSyncDomain. By default, kernels are launched in domain 0. Kernels launched with CU_LAUNCH_MEM_SYNC_DOMAIN_REMOTE will have a different domain ID. Users may also alter the domain ID with CUlaunchMemSyncDomainMap for a specific stream / graph node / kernel launch. See CU_LAUNCH_ATTRIBUTE_MEM_SYNC_DOMAIN_MAP. The domain ID range is available through CU_DEVICE_ATTRIBUTE_MEM_SYNC_DOMAIN_COUNT.

default_

The default domain ID to use for designated kernels

Type:

bytes

remote

The remote domain ID to use for designated kernels

Type:

bytes

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUlaunchAttributeValue

Launch attributes union; used as value field of CUlaunchAttribute

pad
Type:

bytes

accessPolicyWindow

Value of launch attribute CU_LAUNCH_ATTRIBUTE_ACCESS_POLICY_WINDOW.

Type:

CUaccessPolicyWindow

cooperative

Value of launch attribute CU_LAUNCH_ATTRIBUTE_COOPERATIVE. Nonzero indicates a cooperative kernel (see cuLaunchCooperativeKernel).

Type:

int

syncPolicy

Value of launch attribute CU_LAUNCH_ATTRIBUTE_SYNCHRONIZATION_POLICY. ::CUsynchronizationPolicy for work queued up in this stream

Type:

CUsynchronizationPolicy

clusterDim

Value of launch attribute CU_LAUNCH_ATTRIBUTE_CLUSTER_DIMENSION that represents the desired cluster dimensions for the kernel. Opaque type with the following fields: - x - The X dimension of the cluster, in blocks. Must be a divisor of the grid X dimension. - y - The Y dimension of the cluster, in blocks. Must be a divisor of the grid Y dimension. - z - The Z dimension of the cluster, in blocks. Must be a divisor of the grid Z dimension.

Type:

anon_struct1

clusterSchedulingPolicyPreference

Value of launch attribute CU_LAUNCH_ATTRIBUTE_CLUSTER_SCHEDULING_POLICY_PREFERENCE. Cluster scheduling policy preference for the kernel.

Type:

CUclusterSchedulingPolicy

programmaticStreamSerializationAllowed

Value of launch attribute CU_LAUNCH_ATTRIBUTE_PROGRAMMATIC_STREAM_SERIALIZATION.

Type:

int

programmaticEvent

Value of launch attribute CU_LAUNCH_ATTRIBUTE_PROGRAMMATIC_EVENT with the following fields: - CUevent event - Event to fire when all blocks trigger it. - int flags - Event record flags, see cuEventRecordWithFlags. Does not accept CU_EVENT_RECORD_EXTERNAL. - int triggerAtBlockStart - If this is set to non-0, each block launch will automatically trigger the event.

Type:

anon_struct2

launchCompletionEvent

Value of launch attribute CU_LAUNCH_ATTRIBUTE_LAUNCH_COMPLETION_EVENT with the following fields: - CUevent event - Event to fire when the last block launches. - int flags - Event record flags, see cuEventRecordWithFlags. Does not accept CU_EVENT_RECORD_EXTERNAL.

Type:

anon_struct3

priority

Value of launch attribute CU_LAUNCH_ATTRIBUTE_PRIORITY. Execution priority of the kernel.

Type:

int

memSyncDomainMap

Value of launch attribute CU_LAUNCH_ATTRIBUTE_MEM_SYNC_DOMAIN_MAP. See CUlaunchMemSyncDomainMap.

Type:

CUlaunchMemSyncDomainMap

memSyncDomain

Value of launch attribute CU_LAUNCH_ATTRIBUTE_MEM_SYNC_DOMAIN. See CUlaunchMemSyncDomain.

Type:

CUlaunchMemSyncDomain

deviceUpdatableKernelNode

Value of launch attribute CU_LAUNCH_ATTRIBUTE_DEVICE_UPDATABLE_KERNEL_NODE with the following fields: - int deviceUpdatable - Whether or not the resulting kernel node should be device-updatable. - CUgraphDeviceNode devNode - Returns a handle to pass to the various device-side update functions.

Type:

anon_struct4

sharedMemCarveout

Value of launch attribute CU_LAUNCH_ATTRIBUTE_PREFERRED_SHARED_MEMORY_CARVEOUT.

Type:

unsigned int

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUlaunchAttribute

Launch attribute

id

Attribute to set

Type:

CUlaunchAttributeID

value

Value of the attribute

Type:

CUlaunchAttributeValue

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUlaunchConfig

CUDA extensible launch configuration

gridDimX

Width of grid in blocks

Type:

unsigned int

gridDimY

Height of grid in blocks

Type:

unsigned int

gridDimZ

Depth of grid in blocks

Type:

unsigned int

blockDimX

X dimension of each thread block

Type:

unsigned int

blockDimY

Y dimension of each thread block

Type:

unsigned int

blockDimZ

Z dimension of each thread block

Type:

unsigned int

sharedMemBytes

Dynamic shared-memory size per thread block in bytes

Type:

unsigned int

hStream

Stream identifier

Type:

CUstream

attrs

List of attributes; nullable if CUlaunchConfig::numAttrs == 0

Type:

CUlaunchAttribute

numAttrs

Number of attributes populated in CUlaunchConfig::attrs

Type:

unsigned int

getPtr()

Get memory address of class instance
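
Example - a minimal sketch of an extensible launch through cuLaunchKernelEx with a single priority attribute, assuming a current context, a stream from cuStreamCreate, and kernel a CUfunction taking no arguments; assigning a Python list to attrs and passing 0 for kernelParams/extra are assumptions about the binding:

    from cuda.bindings import driver

    attr = driver.CUlaunchAttribute()
    attr.id = driver.CUlaunchAttributeID.CU_LAUNCH_ATTRIBUTE_PRIORITY
    attr.value.priority = 0            # valid range can be queried with cuCtxGetStreamPriorityRange

    config = driver.CUlaunchConfig()
    config.gridDimX, config.gridDimY, config.gridDimZ = 64, 1, 1
    config.blockDimX, config.blockDimY, config.blockDimZ = 256, 1, 1
    config.sharedMemBytes = 0
    config.hStream = stream
    config.attrs = [attr]              # assumed to accept a Python list of CUlaunchAttribute
    config.numAttrs = 1

    err, = driver.cuLaunchKernelEx(config, kernel, 0, 0)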

class cuda.bindings.driver.CUkernelNodeAttrID(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Launch attributes enum; used as id field of CUlaunchAttribute

class cuda.bindings.driver.CUkernelNodeAttrValue_v1

Launch attributes union; used as value field of CUlaunchAttribute

pad
Type:

bytes

accessPolicyWindow

Value of launch attribute CU_LAUNCH_ATTRIBUTE_ACCESS_POLICY_WINDOW.

Type:

CUaccessPolicyWindow

cooperative

Value of launch attribute CU_LAUNCH_ATTRIBUTE_COOPERATIVE. Nonzero indicates a cooperative kernel (see cuLaunchCooperativeKernel).

Type:

int

syncPolicy

Value of launch attribute CU_LAUNCH_ATTRIBUTE_SYNCHRONIZATION_POLICY. ::CUsynchronizationPolicy for work queued up in this stream

Type:

CUsynchronizationPolicy

clusterDim

Value of launch attribute CU_LAUNCH_ATTRIBUTE_CLUSTER_DIMENSION that represents the desired cluster dimensions for the kernel. Opaque type with the following fields: - x - The X dimension of the cluster, in blocks. Must be a divisor of the grid X dimension. - y - The Y dimension of the cluster, in blocks. Must be a divisor of the grid Y dimension. - z - The Z dimension of the cluster, in blocks. Must be a divisor of the grid Z dimension.

Type:

anon_struct1

clusterSchedulingPolicyPreference

Value of launch attribute CU_LAUNCH_ATTRIBUTE_CLUSTER_SCHEDULING_POLICY_PREFERENCE. Cluster scheduling policy preference for the kernel.

Type:

CUclusterSchedulingPolicy

programmaticStreamSerializationAllowed

Value of launch attribute CU_LAUNCH_ATTRIBUTE_PROGRAMMATIC_STREAM_SERIALIZATION.

Type:

int

programmaticEvent

Value of launch attribute CU_LAUNCH_ATTRIBUTE_PROGRAMMATIC_EVENT with the following fields: - CUevent event - Event to fire when all blocks trigger it. - int flags - Event record flags, see cuEventRecordWithFlags. Does not accept CU_EVENT_RECORD_EXTERNAL. - int triggerAtBlockStart - If this is set to non-0, each block launch will automatically trigger the event.

Type:

anon_struct2

launchCompletionEvent

Value of launch attribute CU_LAUNCH_ATTRIBUTE_LAUNCH_COMPLETION_EVENT with the following fields: - CUevent event - Event to fire when the last block launches. - int flags - Event record flags, see cuEventRecordWithFlags. Does not accept CU_EVENT_RECORD_EXTERNAL.

Type:

anon_struct3

priority

Value of launch attribute CU_LAUNCH_ATTRIBUTE_PRIORITY. Execution priority of the kernel.

Type:

int

memSyncDomainMap

Value of launch attribute CU_LAUNCH_ATTRIBUTE_MEM_SYNC_DOMAIN_MAP. See CUlaunchMemSyncDomainMap.

Type:

CUlaunchMemSyncDomainMap

memSyncDomain

Value of launch attribute CU_LAUNCH_ATTRIBUTE_MEM_SYNC_DOMAIN. See CUlaunchMemSyncDomain.

Type:

CUlaunchMemSyncDomain

deviceUpdatableKernelNode

Value of launch attribute CU_LAUNCH_ATTRIBUTE_DEVICE_UPDATABLE_KERNEL_NODE with the following fields: - int deviceUpdatable - Whether or not the resulting kernel node should be device-updatable. - CUgraphDeviceNode devNode - Returns a handle to pass to the various device-side update functions.

Type:

anon_struct4

sharedMemCarveout

Value of launch attribute CU_LAUNCH_ATTRIBUTE_PREFERRED_SHARED_MEMORY_CARVEOUT.

Type:

unsigned int

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUkernelNodeAttrValue

Launch attributes union; used as value field of CUlaunchAttribute

pad
Type:

bytes

accessPolicyWindow

Value of launch attribute CU_LAUNCH_ATTRIBUTE_ACCESS_POLICY_WINDOW.

Type:

CUaccessPolicyWindow

cooperative

Value of launch attribute CU_LAUNCH_ATTRIBUTE_COOPERATIVE. Nonzero indicates a cooperative kernel (see cuLaunchCooperativeKernel).

Type:

int

syncPolicy

Value of launch attribute CU_LAUNCH_ATTRIBUTE_SYNCHRONIZATION_POLICY. ::CUsynchronizationPolicy for work queued up in this stream

Type:

CUsynchronizationPolicy

clusterDim

Value of launch attribute CU_LAUNCH_ATTRIBUTE_CLUSTER_DIMENSION that represents the desired cluster dimensions for the kernel. Opaque type with the following fields: - x - The X dimension of the cluster, in blocks. Must be a divisor of the grid X dimension. - y - The Y dimension of the cluster, in blocks. Must be a divisor of the grid Y dimension. - z - The Z dimension of the cluster, in blocks. Must be a divisor of the grid Z dimension.

Type:

anon_struct1

clusterSchedulingPolicyPreference

Value of launch attribute CU_LAUNCH_ATTRIBUTE_CLUSTER_SCHEDULING_POLICY_PREFERENCE. Cluster scheduling policy preference for the kernel.

Type:

CUclusterSchedulingPolicy

programmaticStreamSerializationAllowed

Value of launch attribute CU_LAUNCH_ATTRIBUTE_PROGRAMMATIC_STREAM_SERIALIZATION.

Type:

int

programmaticEvent

Value of launch attribute CU_LAUNCH_ATTRIBUTE_PROGRAMMATIC_EVENT with the following fields: - CUevent event - Event to fire when all blocks trigger it. - int flags - Event record flags, see cuEventRecordWithFlags. Does not accept CU_EVENT_RECORD_EXTERNAL. - int triggerAtBlockStart - If this is set to non-0, each block launch will automatically trigger the event.

Type:

anon_struct2

launchCompletionEvent

Value of launch attribute CU_LAUNCH_ATTRIBUTE_LAUNCH_COMPLETION_EVENT with the following fields: - CUevent event - Event to fire when the last block launches. - int flags - Event record flags, see cuEventRecordWithFlags. Does not accept CU_EVENT_RECORD_EXTERNAL.

Type:

anon_struct3

priority

Value of launch attribute CU_LAUNCH_ATTRIBUTE_PRIORITY. Execution priority of the kernel.

Type:

int

memSyncDomainMap

Value of launch attribute CU_LAUNCH_ATTRIBUTE_MEM_SYNC_DOMAIN_MAP. See CUlaunchMemSyncDomainMap.

Type:

CUlaunchMemSyncDomainMap

memSyncDomain

Value of launch attribute CU_LAUNCH_ATTRIBUTE_MEM_SYNC_DOMAIN. See CUlaunchMemSyncDomain.

Type:

CUlaunchMemSyncDomain

deviceUpdatableKernelNode

Value of launch attribute CU_LAUNCH_ATTRIBUTE_DEVICE_UPDATABLE_KERNEL_NODE with the following fields: - int deviceUpdatable - Whether or not the resulting kernel node should be device-updatable. - CUgraphDeviceNode devNode - Returns a handle to pass to the various device-side update functions.

Type:

anon_struct4

sharedMemCarveout

Value of launch attribute CU_LAUNCH_ATTRIBUTE_PREFERRED_SHARED_MEMORY_CARVEOUT.

Type:

unsigned int

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUstreamAttrID(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Launch attributes enum; used as id field of CUlaunchAttribute

class cuda.bindings.driver.CUstreamAttrValue_v1

Launch attributes union; used as value field of CUlaunchAttribute

pad
Type:

bytes

accessPolicyWindow

Value of launch attribute CU_LAUNCH_ATTRIBUTE_ACCESS_POLICY_WINDOW.

Type:

CUaccessPolicyWindow

cooperative

Value of launch attribute CU_LAUNCH_ATTRIBUTE_COOPERATIVE. Nonzero indicates a cooperative kernel (see cuLaunchCooperativeKernel).

Type:

int

syncPolicy

Value of launch attribute CU_LAUNCH_ATTRIBUTE_SYNCHRONIZATION_POLICY. ::CUsynchronizationPolicy for work queued up in this stream

Type:

CUsynchronizationPolicy

clusterDim

Value of launch attribute CU_LAUNCH_ATTRIBUTE_CLUSTER_DIMENSION that represents the desired cluster dimensions for the kernel. Opaque type with the following fields: - x - The X dimension of the cluster, in blocks. Must be a divisor of the grid X dimension. - y - The Y dimension of the cluster, in blocks. Must be a divisor of the grid Y dimension. - z - The Z dimension of the cluster, in blocks. Must be a divisor of the grid Z dimension.

Type:

anon_struct1

clusterSchedulingPolicyPreference

Value of launch attribute CU_LAUNCH_ATTRIBUTE_CLUSTER_SCHEDULING_POLICY_PREFERENCE. Cluster scheduling policy preference for the kernel.

Type:

CUclusterSchedulingPolicy

programmaticStreamSerializationAllowed

Value of launch attribute CU_LAUNCH_ATTRIBUTE_PROGRAMMATIC_STREAM_SERIALIZATION.

Type:

int

programmaticEvent

Value of launch attribute CU_LAUNCH_ATTRIBUTE_PROGRAMMATIC_EVENT with the following fields: - CUevent event - Event to fire when all blocks trigger it. - int flags - Event record flags, see cuEventRecordWithFlags. Does not accept CU_EVENT_RECORD_EXTERNAL. - int triggerAtBlockStart - If this is set to non-0, each block launch will automatically trigger the event.

Type:

anon_struct2

launchCompletionEvent

Value of launch attribute CU_LAUNCH_ATTRIBUTE_LAUNCH_COMPLETION_EVENT with the following fields: - CUevent event - Event to fire when the last block launches. - int flags - Event record flags, see cuEventRecordWithFlags. Does not accept CU_EVENT_RECORD_EXTERNAL.

Type:

anon_struct3

priority

Value of launch attribute CU_LAUNCH_ATTRIBUTE_PRIORITY. Execution priority of the kernel.

Type:

int

memSyncDomainMap

Value of launch attribute CU_LAUNCH_ATTRIBUTE_MEM_SYNC_DOMAIN_MAP. See CUlaunchMemSyncDomainMap.

Type:

CUlaunchMemSyncDomainMap

memSyncDomain

Value of launch attribute CU_LAUNCH_ATTRIBUTE_MEM_SYNC_DOMAIN. See CUlaunchMemSyncDomain.

Type:

CUlaunchMemSyncDomain

deviceUpdatableKernelNode

Value of launch attribute CU_LAUNCH_ATTRIBUTE_DEVICE_UPDATABLE_KERNEL_NODE with the following fields: - int deviceUpdatable - Whether or not the resulting kernel node should be device-updatable. - CUgraphDeviceNode devNode - Returns a handle to pass to the various device-side update functions.

Type:

anon_struct4

sharedMemCarveout

Value of launch attribute CU_LAUNCH_ATTRIBUTE_PREFERRED_SHARED_MEMORY_CARVEOUT.

Type:

unsigned int

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUstreamAttrValue

Launch attributes union; used as value field of CUlaunchAttribute

pad
Type:

bytes

accessPolicyWindow

Value of launch attribute CU_LAUNCH_ATTRIBUTE_ACCESS_POLICY_WINDOW.

Type:

CUaccessPolicyWindow

cooperative

Value of launch attribute CU_LAUNCH_ATTRIBUTE_COOPERATIVE. Nonzero indicates a cooperative kernel (see cuLaunchCooperativeKernel).

Type:

int

syncPolicy

Value of launch attribute CU_LAUNCH_ATTRIBUTE_SYNCHRONIZATION_POLICY. ::CUsynchronizationPolicy for work queued up in this stream

Type:

CUsynchronizationPolicy

clusterDim

Value of launch attribute CU_LAUNCH_ATTRIBUTE_CLUSTER_DIMENSION that represents the desired cluster dimensions for the kernel. Opaque type with the following fields: - x - The X dimension of the cluster, in blocks. Must be a divisor of the grid X dimension. - y - The Y dimension of the cluster, in blocks. Must be a divisor of the grid Y dimension. - z - The Z dimension of the cluster, in blocks. Must be a divisor of the grid Z dimension.

Type:

anon_struct1

clusterSchedulingPolicyPreference

Value of launch attribute CU_LAUNCH_ATTRIBUTE_CLUSTER_SCHEDULING_POLICY_PREFERENCE. Cluster scheduling policy preference for the kernel.

Type:

CUclusterSchedulingPolicy

programmaticStreamSerializationAllowed

Value of launch attribute CU_LAUNCH_ATTRIBUTE_PROGRAMMATIC_STREAM_SERIALIZATION.

Type:

int

programmaticEvent

Value of launch attribute CU_LAUNCH_ATTRIBUTE_PROGRAMMATIC_EVENT with the following fields: - CUevent event - Event to fire when all blocks trigger it. - int flags - Event record flags, see cuEventRecordWithFlags. Does not accept CU_EVENT_RECORD_EXTERNAL. - int triggerAtBlockStart - If this is set to non-0, each block launch will automatically trigger the event.

Type:

anon_struct2

launchCompletionEvent

Value of launch attribute CU_LAUNCH_ATTRIBUTE_LAUNCH_COMPLETION_EVENT with the following fields: - CUevent event - Event to fire when the last block launches. - int flags - Event record flags, see cuEventRecordWithFlags. Does not accept CU_EVENT_RECORD_EXTERNAL.

Type:

anon_struct3

priority

Value of launch attribute CU_LAUNCH_ATTRIBUTE_PRIORITY. Execution priority of the kernel.

Type:

int

memSyncDomainMap

Value of launch attribute CU_LAUNCH_ATTRIBUTE_MEM_SYNC_DOMAIN_MAP. See CUlaunchMemSyncDomainMap.

Type:

CUlaunchMemSyncDomainMap

memSyncDomain

Value of launch attribute CU_LAUNCH_ATTRIBUTE_MEM_SYNC_DOMAIN. See CUlaunchMemSyncDomain.

Type:

CUlaunchMemSyncDomain

deviceUpdatableKernelNode

Value of launch attribute CU_LAUNCH_ATTRIBUTE_DEVICE_UPDATABLE_KERNEL_NODE with the following fields: - int deviceUpdatable - Whether or not the resulting kernel node should be device-updatable. - CUgraphDeviceNode devNode - Returns a handle to pass to the various device-side update functions.

Type:

anon_struct4

sharedMemCarveout

Value of launch attribute CU_LAUNCH_ATTRIBUTE_PREFERRED_SHARED_MEMORY_CARVEOUT.

Type:

unsigned int

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUexecAffinitySmCount_v1

Value for CU_EXEC_AFFINITY_TYPE_SM_COUNT

val

The number of SMs the context is limited to use.

Type:

unsigned int

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUexecAffinitySmCount

Value for CU_EXEC_AFFINITY_TYPE_SM_COUNT

val

The number of SMs the context is limited to use.

Type:

unsigned int

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUexecAffinityParam_v1

Execution Affinity Parameters

type
Type:

CUexecAffinityType

param
Type:

anon_union3

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUexecAffinityParam

Execution Affinity Parameters

type
Type:

CUexecAffinityType

param
Type:

anon_union3

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUctxCigParam

CIG Context Create Params

sharedDataType
Type:

CUcigDataType

sharedData
Type:

Any

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUctxCreateParams

Params for creating a CUDA context. Exactly one of execAffinityParams and cigParams must be non-NULL.

execAffinityParams
Type:

CUexecAffinityParam

numExecAffinityParams
Type:

int

cigParams
Type:

CUctxCigParam

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUlibraryHostUniversalFunctionAndDataTable
functionTable
Type:

Any

functionWindowSize
Type:

size_t

dataTable
Type:

Any

dataWindowSize
Type:

size_t

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUstreamCallback(*args, **kwargs)
getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUoccupancyB2DSize(*args, **kwargs)
getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_MEMCPY2D_v2

2D memory copy parameters

srcXInBytes

Source X in bytes

Type:

size_t

srcY

Source Y

Type:

size_t

srcMemoryType

Source memory type (host, device, array)

Type:

CUmemorytype

srcHost

Source host pointer

Type:

Any

srcDevice

Source device pointer

Type:

CUdeviceptr

srcArray

Source array reference

Type:

CUarray

srcPitch

Source pitch (ignored when src is array)

Type:

size_t

dstXInBytes

Destination X in bytes

Type:

size_t

dstY

Destination Y

Type:

size_t

dstMemoryType

Destination memory type (host, device, array)

Type:

CUmemorytype

dstHost

Destination host pointer

Type:

Any

dstDevice

Destination device pointer

Type:

CUdeviceptr

dstArray

Destination array reference

Type:

CUarray

dstPitch

Destination pitch (ignored when dst is array)

Type:

size_t

WidthInBytes

Width of 2D memory copy in bytes

Type:

size_t

Height

Height of 2D memory copy

Type:

size_t

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_MEMCPY2D

2D memory copy parameters

srcXInBytes

Source X in bytes

Type:

size_t

srcY

Source Y

Type:

size_t

srcMemoryType

Source memory type (host, device, array)

Type:

CUmemorytype

srcHost

Source host pointer

Type:

Any

srcDevice

Source device pointer

Type:

CUdeviceptr

srcArray

Source array reference

Type:

CUarray

srcPitch

Source pitch (ignored when src is array)

Type:

size_t

dstXInBytes

Destination X in bytes

Type:

size_t

dstY

Destination Y

Type:

size_t

dstMemoryType

Destination memory type (host, device, array)

Type:

CUmemorytype

dstHost

Destination host pointer

Type:

Any

dstDevice

Destination device pointer

Type:

CUdeviceptr

dstArray

Destination array reference

Type:

CUarray

dstPitch

Destination pitch (ignored when dst is array)

Type:

size_t

WidthInBytes

Width of 2D memory copy in bytes

Type:

size_t

Height

Height of 2D memory copy

Type:

size_t

getPtr()

Get memory address of class instance
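
Example - a minimal sketch of a host-to-device 2D copy into pitched memory, assuming cuInit and context creation have already happened; passing the host pointer as an integer address mirrors how the bindings' samples pass host buffers:

    import numpy as np
    from cuda.bindings import driver

    rows, cols = 32, 64
    host = np.arange(rows * cols, dtype=np.float32).reshape(rows, cols)
    err, dptr, dpitch = driver.cuMemAllocPitch(cols * 4, rows, 4)

    cpy = driver.CUDA_MEMCPY2D()
    cpy.srcMemoryType = driver.CUmemorytype.CU_MEMORYTYPE_HOST
    cpy.srcHost = host.ctypes.data     # integer host address
    cpy.srcPitch = host.strides[0]
    cpy.srcXInBytes = cpy.srcY = 0
    cpy.dstMemoryType = driver.CUmemorytype.CU_MEMORYTYPE_DEVICE
    cpy.dstDevice = dptr
    cpy.dstPitch = dpitch
    cpy.dstXInBytes = cpy.dstY = 0
    cpy.WidthInBytes = cols * 4
    cpy.Height = rows

    err, = driver.cuMemcpy2D(cpy)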

class cuda.bindings.driver.CUDA_MEMCPY3D_v2

3D memory copy parameters

srcXInBytes

Source X in bytes

Type:

size_t

srcY

Source Y

Type:

size_t

srcZ

Source Z

Type:

size_t

srcLOD

Source LOD

Type:

size_t

srcMemoryType

Source memory type (host, device, array)

Type:

CUmemorytype

srcHost

Source host pointer

Type:

Any

srcDevice

Source device pointer

Type:

CUdeviceptr

srcArray

Source array reference

Type:

CUarray

reserved0

Must be NULL

Type:

Any

srcPitch

Source pitch (ignored when src is array)

Type:

size_t

srcHeight

Source height (ignored when src is array; may be 0 if Depth==1)

Type:

size_t

dstXInBytes

Destination X in bytes

Type:

size_t

dstY

Destination Y

Type:

size_t

dstZ

Destination Z

Type:

size_t

dstLOD

Destination LOD

Type:

size_t

dstMemoryType

Destination memory type (host, device, array)

Type:

CUmemorytype

dstHost

Destination host pointer

Type:

Any

dstDevice

Destination device pointer

Type:

CUdeviceptr

dstArray

Destination array reference

Type:

CUarray

reserved1

Must be NULL

Type:

Any

dstPitch

Destination pitch (ignored when dst is array)

Type:

size_t

dstHeight

Destination height (ignored when dst is array; may be 0 if Depth==1)

Type:

size_t

WidthInBytes

Width of 3D memory copy in bytes

Type:

size_t

Height

Height of 3D memory copy

Type:

size_t

Depth

Depth of 3D memory copy

Type:

size_t

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_MEMCPY3D

3D memory copy parameters

srcXInBytes

Source X in bytes

Type:

size_t

srcY

Source Y

Type:

size_t

srcZ

Source Z

Type:

size_t

srcLOD

Source LOD

Type:

size_t

srcMemoryType

Source memory type (host, device, array)

Type:

CUmemorytype

srcHost

Source host pointer

Type:

Any

srcDevice

Source device pointer

Type:

CUdeviceptr

srcArray

Source array reference

Type:

CUarray

reserved0

Must be NULL

Type:

Any

srcPitch

Source pitch (ignored when src is array)

Type:

size_t

srcHeight

Source height (ignored when src is array; may be 0 if Depth==1)

Type:

size_t

dstXInBytes

Destination X in bytes

Type:

size_t

dstY

Destination Y

Type:

size_t

dstZ

Destination Z

Type:

size_t

dstLOD

Destination LOD

Type:

size_t

dstMemoryType

Destination memory type (host, device, array)

Type:

CUmemorytype

dstHost

Destination host pointer

Type:

Any

dstDevice

Destination device pointer

Type:

CUdeviceptr

dstArray

Destination array reference

Type:

CUarray

reserved1

Must be NULL

Type:

Any

dstPitch

Destination pitch (ignored when dst is array)

Type:

size_t

dstHeight

Destination height (ignored when dst is array; may be 0 if Depth==1)

Type:

size_t

WidthInBytes

Width of 3D memory copy in bytes

Type:

size_t

Height

Height of 3D memory copy

Type:

size_t

Depth

Depth of 3D memory copy

Type:

size_t

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_MEMCPY3D_PEER_v1

3D memory cross-context copy parameters

srcXInBytes

Source X in bytes

Type:

size_t

srcY

Source Y

Type:

size_t

srcZ

Source Z

Type:

size_t

srcLOD

Source LOD

Type:

size_t

srcMemoryType

Source memory type (host, device, array)

Type:

CUmemorytype

srcHost

Source host pointer

Type:

Any

srcDevice

Source device pointer

Type:

CUdeviceptr

srcArray

Source array reference

Type:

CUarray

srcContext

Source context (ignored when srcMemoryType is CU_MEMORYTYPE_ARRAY)

Type:

CUcontext

srcPitch

Source pitch (ignored when src is array)

Type:

size_t

srcHeight

Source height (ignored when src is array; may be 0 if Depth==1)

Type:

size_t

dstXInBytes

Destination X in bytes

Type:

size_t

dstY

Destination Y

Type:

size_t

dstZ

Destination Z

Type:

size_t

dstLOD

Destination LOD

Type:

size_t

dstMemoryType

Destination memory type (host, device, array)

Type:

CUmemorytype

dstHost

Destination host pointer

Type:

Any

dstDevice

Destination device pointer

Type:

CUdeviceptr

dstArray

Destination array reference

Type:

CUarray

dstContext

Destination context (ignored when dstMemoryType is CU_MEMORYTYPE_ARRAY)

Type:

CUcontext

dstPitch

Destination pitch (ignored when dst is array)

Type:

size_t

dstHeight

Destination height (ignored when dst is array; may be 0 if Depth==1)

Type:

size_t

WidthInBytes

Width of 3D memory copy in bytes

Type:

size_t

Height

Height of 3D memory copy

Type:

size_t

Depth

Depth of 3D memory copy

Type:

size_t

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_MEMCPY3D_PEER

3D memory cross-context copy parameters

srcXInBytes

Source X in bytes

Type:

size_t

srcY

Source Y

Type:

size_t

srcZ

Source Z

Type:

size_t

srcLOD

Source LOD

Type:

size_t

srcMemoryType

Source memory type (host, device, array)

Type:

CUmemorytype

srcHost

Source host pointer

Type:

Any

srcDevice

Source device pointer

Type:

CUdeviceptr

srcArray

Source array reference

Type:

CUarray

srcContext

Source context (ignored when srcMemoryType is CU_MEMORYTYPE_ARRAY)

Type:

CUcontext

srcPitch

Source pitch (ignored when src is array)

Type:

size_t

srcHeight

Source height (ignored when src is array; may be 0 if Depth==1)

Type:

size_t

dstXInBytes

Destination X in bytes

Type:

size_t

dstY

Destination Y

Type:

size_t

dstZ

Destination Z

Type:

size_t

dstLOD

Destination LOD

Type:

size_t

dstMemoryType

Destination memory type (host, device, array)

Type:

CUmemorytype

dstHost

Destination host pointer

Type:

Any

dstDevice

Destination device pointer

Type:

CUdeviceptr

dstArray

Destination array reference

Type:

CUarray

dstContext

Destination context (ignored when dstMemoryType is CU_MEMORYTYPE_ARRAY)

Type:

CUcontext

dstPitch

Destination pitch (ignored when dst is array)

Type:

size_t

dstHeight

Destination height (ignored when dst is array; may be 0 if Depth==1)

Type:

size_t

WidthInBytes

Width of 3D memory copy in bytes

Type:

size_t

Height

Height of 3D memory copy

Type:

size_t

Depth

Depth of 3D memory copy

Type:

size_t

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_MEMCPY_NODE_PARAMS

Memcpy node parameters

flags

Must be zero

Type:

int

reserved

Must be zero

Type:

int

copyCtx

Context on which to run the node

Type:

CUcontext

copyParams

Parameters for the memory copy

Type:

CUDA_MEMCPY3D

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_ARRAY_DESCRIPTOR_v2

Array descriptor

Width

Width of array

Type:

size_t

Height

Height of array

Type:

size_t

Format

Array format

Type:

CUarray_format

NumChannels

Channels per array element

Type:

unsigned int

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_ARRAY_DESCRIPTOR

Array descriptor

Width

Width of array

Type:

size_t

Height

Height of array

Type:

size_t

Format

Array format

Type:

CUarray_format

NumChannels

Channels per array element

Type:

unsigned int

getPtr()

Get memory address of class instance
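
Example - a minimal sketch that creates a 512x512 single-channel float CUDA array from a CUDA_ARRAY_DESCRIPTOR, assuming a context is current:

    from cuda.bindings import driver

    desc = driver.CUDA_ARRAY_DESCRIPTOR()
    desc.Width = 512
    desc.Height = 512
    desc.Format = driver.CUarray_format.CU_AD_FORMAT_FLOAT
    desc.NumChannels = 1

    err, cu_array = driver.cuArrayCreate(desc)
    # ... copy into the array, or bind it to a texture/surface object ...
    err, = driver.cuArrayDestroy(cu_array)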

class cuda.bindings.driver.CUDA_ARRAY3D_DESCRIPTOR_v2

3D array descriptor

Width

Width of 3D array

Type:

size_t

Height

Height of 3D array

Type:

size_t

Depth

Depth of 3D array

Type:

size_t

Format

Array format

Type:

CUarray_format

NumChannels

Channels per array element

Type:

unsigned int

Flags

Flags

Type:

unsigned int

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_ARRAY3D_DESCRIPTOR

3D array descriptor

Width

Width of 3D array

Type:

size_t

Height

Height of 3D array

Type:

size_t

Depth

Depth of 3D array

Type:

size_t

Format

Array format

Type:

CUarray_format

NumChannels

Channels per array element

Type:

unsigned int

Flags

Flags

Type:

unsigned int

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_ARRAY_SPARSE_PROPERTIES_v1

CUDA array sparse properties

tileExtent
Type:

anon_struct5

miptailFirstLevel

First mip level at which the mip tail begins.

Type:

unsigned int

miptailSize

Total size of the mip tail.

Type:

unsigned long long

flags

Flags will either be zero or CU_ARRAY_SPARSE_PROPERTIES_SINGLE_MIPTAIL

Type:

unsigned int

reserved
Type:

List[unsigned int]

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_ARRAY_SPARSE_PROPERTIES

CUDA array sparse properties

tileExtent
Type:

anon_struct5

miptailFirstLevel

First mip level at which the mip tail begins.

Type:

unsigned int

miptailSize

Total size of the mip tail.

Type:

unsigned long long

flags

Flags will either be zero or CU_ARRAY_SPARSE_PROPERTIES_SINGLE_MIPTAIL

Type:

unsigned int

reserved
Type:

List[unsigned int]

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_ARRAY_MEMORY_REQUIREMENTS_v1

CUDA array memory requirements

size

Total required memory size

Type:

size_t

alignment

Alignment requirement

Type:

size_t

reserved
Type:

List[unsigned int]

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_ARRAY_MEMORY_REQUIREMENTS

CUDA array memory requirements

size

Total required memory size

Type:

size_t

alignment

Alignment requirement

Type:

size_t

reserved
Type:

List[unsigned int]

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_RESOURCE_DESC_v1

CUDA Resource descriptor

resType

Resource type

Type:

CUresourcetype

res
Type:

anon_union4

flags

Flags (must be zero)

Type:

unsigned int

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_RESOURCE_DESC

CUDA Resource descriptor

resType

Resource type

Type:

CUresourcetype

res
Type:

anon_union4

flags

Flags (must be zero)

Type:

unsigned int

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_TEXTURE_DESC_v1

Texture descriptor

addressMode

Address modes

Type:

List[CUaddress_mode]

filterMode

Filter mode

Type:

CUfilter_mode

flags

Flags

Type:

unsigned int

maxAnisotropy

Maximum anisotropy ratio

Type:

unsigned int

mipmapFilterMode

Mipmap filter mode

Type:

CUfilter_mode

mipmapLevelBias

Mipmap level bias

Type:

float

minMipmapLevelClamp

Mipmap minimum level clamp

Type:

float

maxMipmapLevelClamp

Mipmap maximum level clamp

Type:

float

borderColor

Border Color

Type:

List[float]

reserved
Type:

List[int]

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_TEXTURE_DESC

Texture descriptor

addressMode

Address modes

Type:

List[CUaddress_mode]

filterMode

Filter mode

Type:

CUfilter_mode

flags

Flags

Type:

unsigned int

maxAnisotropy

Maximum anisotropy ratio

Type:

unsigned int

mipmapFilterMode

Mipmap filter mode

Type:

CUfilter_mode

mipmapLevelBias

Mipmap level bias

Type:

float

minMipmapLevelClamp

Mipmap minimum level clamp

Type:

float

maxMipmapLevelClamp

Mipmap maximum level clamp

Type:

float

borderColor

Border Color

Type:

List[float]

reserved
Type:

List[int]

getPtr()

Get memory address of class instance
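
Example - a minimal sketch that builds CUDA_RESOURCE_DESC and CUDA_TEXTURE_DESC around a CUDA array (cu_array from cuArrayCreate above) and turns them into a texture object; assigning the nested res.array.hArray field, assigning a 3-element list to addressMode, and passing None for the optional resource-view descriptor are all assumptions about the binding:

    from cuda.bindings import driver

    res_desc = driver.CUDA_RESOURCE_DESC()
    res_desc.resType = driver.CUresourcetype.CU_RESOURCE_TYPE_ARRAY
    res_desc.res.array.hArray = cu_array      # CUarray created earlier
    res_desc.flags = 0

    tex_desc = driver.CUDA_TEXTURE_DESC()
    tex_desc.addressMode = [driver.CUaddress_mode.CU_TR_ADDRESS_MODE_CLAMP] * 3
    tex_desc.filterMode = driver.CUfilter_mode.CU_TR_FILTER_MODE_LINEAR
    tex_desc.flags = 0
    tex_desc.maxAnisotropy = 1

    err, tex_obj = driver.cuTexObjectCreate(res_desc, tex_desc, None)
    # ... pass tex_obj to a kernel as a CUtexObject argument ...
    err, = driver.cuTexObjectDestroy(tex_obj)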

class cuda.bindings.driver.CUDA_RESOURCE_VIEW_DESC_v1

Resource view descriptor

format

Resource view format

Type:

CUresourceViewFormat

width

Width of the resource view

Type:

size_t

height

Height of the resource view

Type:

size_t

depth

Depth of the resource view

Type:

size_t

firstMipmapLevel

First defined mipmap level

Type:

unsigned int

lastMipmapLevel

Last defined mipmap level

Type:

unsigned int

firstLayer

First layer index

Type:

unsigned int

lastLayer

Last layer index

Type:

unsigned int

reserved
Type:

List[unsigned int]

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_RESOURCE_VIEW_DESC

Resource view descriptor

format

Resource view format

Type:

CUresourceViewFormat

width

Width of the resource view

Type:

size_t

height

Height of the resource view

Type:

size_t

depth

Depth of the resource view

Type:

size_t

firstMipmapLevel

First defined mipmap level

Type:

unsigned int

lastMipmapLevel

Last defined mipmap level

Type:

unsigned int

firstLayer

First layer index

Type:

unsigned int

lastLayer

Last layer index

Type:

unsigned int

reserved
Type:

List[unsigned int]

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUtensorMap

Tensor map descriptor. Requires compiler support for aligning to 64 bytes.

opaque
Type:

List[cuuint64_t]

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_POINTER_ATTRIBUTE_P2P_TOKENS_v1

GPU Direct v3 tokens

p2pToken
Type:

unsigned long long

vaSpaceToken
Type:

unsigned int

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_POINTER_ATTRIBUTE_P2P_TOKENS

GPU Direct v3 tokens

p2pToken
Type:

unsigned long long

vaSpaceToken
Type:

unsigned int

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_LAUNCH_PARAMS_v1

Kernel launch parameters

function

Kernel to launch

Type:

CUfunction

gridDimX

Width of grid in blocks

Type:

unsigned int

gridDimY

Height of grid in blocks

Type:

unsigned int

gridDimZ

Depth of grid in blocks

Type:

unsigned int

blockDimX

X dimension of each thread block

Type:

unsigned int

blockDimY

Y dimension of each thread block

Type:

unsigned int

blockDimZ

Z dimension of each thread block

Type:

unsigned int

sharedMemBytes

Dynamic shared-memory size per thread block in bytes

Type:

unsigned int

hStream

Stream identifier

Type:

CUstream

kernelParams

Array of pointers to kernel parameters

Type:

Any

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_LAUNCH_PARAMS

Kernel launch parameters

function

Kernel to launch

Type:

CUfunction

gridDimX

Width of grid in blocks

Type:

unsigned int

gridDimY

Height of grid in blocks

Type:

unsigned int

gridDimZ

Depth of grid in blocks

Type:

unsigned int

blockDimX

X dimension of each thread block

Type:

unsigned int

blockDimY

Y dimension of each thread block

Type:

unsigned int

blockDimZ

Z dimension of each thread block

Type:

unsigned int

sharedMemBytes

Dynamic shared-memory size per thread block in bytes

Type:

unsigned int

hStream

Stream identifier

Type:

CUstream

kernelParams

Array of pointers to kernel parameters

Type:

Any

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_EXTERNAL_MEMORY_HANDLE_DESC_v1

External memory handle descriptor

type

Type of the handle

Type:

CUexternalMemoryHandleType

handle
Type:

anon_union5

size

Size of the memory allocation

Type:

unsigned long long

flags

Flags must either be zero or CUDA_EXTERNAL_MEMORY_DEDICATED

Type:

unsigned int

reserved
Type:

List[unsigned int]

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_EXTERNAL_MEMORY_HANDLE_DESC

External memory handle descriptor

type

Type of the handle

Type:

CUexternalMemoryHandleType

handle
Type:

anon_union5

size

Size of the memory allocation

Type:

unsigned long long

flags

Flags must either be zero or CUDA_EXTERNAL_MEMORY_DEDICATED

Type:

unsigned int

reserved
Type:

List[unsigned int]

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_EXTERNAL_MEMORY_BUFFER_DESC_v1

External memory buffer descriptor

offset

Offset into the memory object where the buffer’s base is

Type:

unsigned long long

size

Size of the buffer

Type:

unsigned long long

flags

Flags reserved for future use. Must be zero.

Type:

unsigned int

reserved
Type:

List[unsigned int]

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_EXTERNAL_MEMORY_BUFFER_DESC

External memory buffer descriptor

offset

Offset into the memory object where the buffer’s base is

Type:

unsigned long long

size

Size of the buffer

Type:

unsigned long long

flags

Flags reserved for future use. Must be zero.

Type:

unsigned int

reserved
Type:

List[unsigned int]

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_EXTERNAL_MEMORY_MIPMAPPED_ARRAY_DESC_v1

External memory mipmap descriptor

offset

Offset into the memory object where the base level of the mipmap chain is.

Type:

unsigned long long

arrayDesc

Format, dimension and type of base level of the mipmap chain

Type:

CUDA_ARRAY3D_DESCRIPTOR

numLevels

Total number of levels in the mipmap chain

Type:

unsigned int

reserved
Type:

List[unsigned int]

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_EXTERNAL_MEMORY_MIPMAPPED_ARRAY_DESC

External memory mipmap descriptor

offset

Offset into the memory object where the base level of the mipmap chain is.

Type:

unsigned long long

arrayDesc

Format, dimension and type of base level of the mipmap chain

Type:

CUDA_ARRAY3D_DESCRIPTOR

numLevels

Total number of levels in the mipmap chain

Type:

unsigned int

reserved
Type:

List[unsigned int]

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_EXTERNAL_SEMAPHORE_HANDLE_DESC_v1

External semaphore handle descriptor

type

Type of the handle

Type:

CUexternalSemaphoreHandleType

handle
Type:

anon_union6

flags

Flags reserved for the future. Must be zero.

Type:

unsigned int

reserved
Type:

List[unsigned int]

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_EXTERNAL_SEMAPHORE_HANDLE_DESC

External semaphore handle descriptor

type

Type of the handle

Type:

CUexternalSemaphoreHandleType

handle
Type:

anon_union6

flags

Flags reserved for the future. Must be zero.

Type:

unsigned int

reserved
Type:

List[unsigned int]

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_EXTERNAL_SEMAPHORE_SIGNAL_PARAMS_v1

External semaphore signal parameters

params
Type:

anon_struct15

flags

Only when ::CUDA_EXTERNAL_SEMAPHORE_SIGNAL_PARAMS is used to signal a CUexternalSemaphore of type CU_EXTERNAL_SEMAPHORE_HANDLE_TYPE_NVSCISYNC, the valid flag is CUDA_EXTERNAL_SEMAPHORE_SIGNAL_SKIP_NVSCIBUF_MEMSYNC which indicates that while signaling the CUexternalSemaphore, no memory synchronization operations should be performed for any external memory object imported as CU_EXTERNAL_MEMORY_HANDLE_TYPE_NVSCIBUF. For all other types of CUexternalSemaphore, flags must be zero.

Type:

unsigned int

reserved
Type:

List[unsigned int]

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_EXTERNAL_SEMAPHORE_SIGNAL_PARAMS

External semaphore signal parameters

params
Type:

anon_struct15

flags

Only when ::CUDA_EXTERNAL_SEMAPHORE_SIGNAL_PARAMS is used to signal a CUexternalSemaphore of type CU_EXTERNAL_SEMAPHORE_HANDLE_TYPE_NVSCISYNC, the valid flag is CUDA_EXTERNAL_SEMAPHORE_SIGNAL_SKIP_NVSCIBUF_MEMSYNC which indicates that while signaling the CUexternalSemaphore, no memory synchronization operations should be performed for any external memory object imported as CU_EXTERNAL_MEMORY_HANDLE_TYPE_NVSCIBUF. For all other types of CUexternalSemaphore, flags must be zero.

Type:

unsigned int

reserved
Type:

List[unsigned int]

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_EXTERNAL_SEMAPHORE_WAIT_PARAMS_v1

External semaphore wait parameters

params
Type:

anon_struct18

flags

Only when ::CUDA_EXTERNAL_SEMAPHORE_WAIT_PARAMS is used to wait on a CUexternalSemaphore of type CU_EXTERNAL_SEMAPHORE_HANDLE_TYPE_NVSCISYNC, the valid flag is CUDA_EXTERNAL_SEMAPHORE_WAIT_SKIP_NVSCIBUF_MEMSYNC which indicates that while waiting for the CUexternalSemaphore, no memory synchronization operations should be performed for any external memory object imported as CU_EXTERNAL_MEMORY_HANDLE_TYPE_NVSCIBUF. For all other types of CUexternalSemaphore, flags must be zero.

Type:

unsigned int

reserved
Type:

List[unsigned int]

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_EXTERNAL_SEMAPHORE_WAIT_PARAMS

External semaphore wait parameters

params
Type:

anon_struct18

flags

Only when ::CUDA_EXTERNAL_SEMAPHORE_WAIT_PARAMS is used to wait on a CUexternalSemaphore of type CU_EXTERNAL_SEMAPHORE_HANDLE_TYPE_NVSCISYNC, the valid flag is CUDA_EXTERNAL_SEMAPHORE_WAIT_SKIP_NVSCIBUF_MEMSYNC which indicates that while waiting for the CUexternalSemaphore, no memory synchronization operations should be performed for any external memory object imported as CU_EXTERNAL_MEMORY_HANDLE_TYPE_NVSCIBUF. For all other types of CUexternalSemaphore, flags must be zero.

Type:

unsigned int

reserved
Type:

List[unsigned int]

getPtr()

Get memory address of class instance
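
As an illustration of how these descriptors are populated from Python, the following hedged sketch fills fence-style signal and wait parameters for an external semaphore that is assumed to have been imported earlier with cuImportExternalSemaphore; the ext_sem handle and the stream are placeholders and are not created here:

    from cuda.bindings import driver

    sig = driver.CUDA_EXTERNAL_SEMAPHORE_SIGNAL_PARAMS()
    sig.params.fence.value = 1    # fence value to signal

    wait = driver.CUDA_EXTERNAL_SEMAPHORE_WAIT_PARAMS()
    wait.params.fence.value = 1   # fence value to wait for

    # With an imported handle ext_sem and a CUstream stream (both assumed):
    # err, = driver.cuSignalExternalSemaphoresAsync([ext_sem], [sig], 1, stream)
    # err, = driver.cuWaitExternalSemaphoresAsync([ext_sem], [wait], 1, stream)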

class cuda.bindings.driver.CUDA_EXT_SEM_SIGNAL_NODE_PARAMS_v1

Semaphore signal node parameters

extSemArray

Array of external semaphore handles.

Type:

CUexternalSemaphore

paramsArray

Array of external semaphore signal parameters.

Type:

CUDA_EXTERNAL_SEMAPHORE_SIGNAL_PARAMS

numExtSems

Number of handles and parameters supplied in extSemArray and paramsArray.

Type:

unsigned int

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_EXT_SEM_SIGNAL_NODE_PARAMS

Semaphore signal node parameters

extSemArray

Array of external semaphore handles.

Type:

CUexternalSemaphore

paramsArray

Array of external semaphore signal parameters.

Type:

CUDA_EXTERNAL_SEMAPHORE_SIGNAL_PARAMS

numExtSems

Number of handles and parameters supplied in extSemArray and paramsArray.

Type:

unsigned int

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_EXT_SEM_SIGNAL_NODE_PARAMS_v2

Semaphore signal node parameters

extSemArray

Array of external semaphore handles.

Type:

CUexternalSemaphore

paramsArray

Array of external semaphore signal parameters.

Type:

CUDA_EXTERNAL_SEMAPHORE_SIGNAL_PARAMS

numExtSems

Number of handles and parameters supplied in extSemArray and paramsArray.

Type:

unsigned int

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_EXT_SEM_WAIT_NODE_PARAMS_v1

Semaphore wait node parameters

extSemArray

Array of external semaphore handles.

Type:

CUexternalSemaphore

paramsArray

Array of external semaphore wait parameters.

Type:

CUDA_EXTERNAL_SEMAPHORE_WAIT_PARAMS

numExtSems

Number of handles and parameters supplied in extSemArray and paramsArray.

Type:

unsigned int

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_EXT_SEM_WAIT_NODE_PARAMS

Semaphore wait node parameters

extSemArray

Array of external semaphore handles.

Type:

CUexternalSemaphore

paramsArray

Array of external semaphore wait parameters.

Type:

CUDA_EXTERNAL_SEMAPHORE_WAIT_PARAMS

numExtSems

Number of handles and parameters supplied in extSemArray and paramsArray.

Type:

unsigned int

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_EXT_SEM_WAIT_NODE_PARAMS_v2

Semaphore wait node parameters

extSemArray

Array of external semaphore handles.

Type:

CUexternalSemaphore

paramsArray

Array of external semaphore wait parameters.

Type:

CUDA_EXTERNAL_SEMAPHORE_WAIT_PARAMS

numExtSems

Number of handles and parameters supplied in extSemArray and paramsArray.

Type:

unsigned int

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUmemGenericAllocationHandle_v1
getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUmemGenericAllocationHandle
getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUarrayMapInfo_v1

Specifies the CUDA array or CUDA mipmapped array memory mapping information

resourceType

Resource type

Type:

CUresourcetype

resource
Type:

anon_union9

subresourceType

Sparse subresource type

Type:

CUarraySparseSubresourceType

subresource
Type:

anon_union10

memOperationType

Memory operation type

Type:

CUmemOperationType

memHandleType

Memory handle type

Type:

CUmemHandleType

memHandle
Type:

anon_union11

offset

Offset within the mip tail, or offset within the memory

Type:

unsigned long long

deviceBitMask

Device ordinal bit mask

Type:

unsigned int

flags

flags for future use, must be zero now.

Type:

unsigned int

reserved

Reserved for future use, must be zero now.

Type:

List[unsigned int]

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUarrayMapInfo

Specifies the CUDA array or CUDA mipmapped array memory mapping information

resourceType

Resource type

Type:

CUresourcetype

resource
Type:

anon_union9

subresourceType

Sparse subresource type

Type:

CUarraySparseSubresourceType

subresource
Type:

anon_union10

memOperationType

Memory operation type

Type:

CUmemOperationType

memHandleType

Memory handle type

Type:

CUmemHandleType

memHandle
Type:

anon_union11

offset

Offset within the mip tail, or offset within the memory

Type:

unsigned long long

deviceBitMask

Device ordinal bit mask

Type:

unsigned int

flags

flags for future use, must be zero now.

Type:

unsigned int

reserved

Reserved for future use, must be zero now.

Type:

List[unsigned int]

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUmemLocation_v1

Specifies a memory location.

type

Specifies the location type, which modifies the meaning of id.

Type:

CUmemLocationType

id

identifier for the location, interpreted according to this location’s CUmemLocationType (for example, a device ordinal when the type is CU_MEM_LOCATION_TYPE_DEVICE).

Type:

int

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUmemLocation

Specifies a memory location.

type

Specifies the location type, which modifies the meaning of id.

Type:

CUmemLocationType

id

identifier for the location, interpreted according to this location’s CUmemLocationType (for example, a device ordinal when the type is CU_MEM_LOCATION_TYPE_DEVICE).

Type:

int

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUmemAllocationProp_v1

Specifies the allocation properties for an allocation.

type

Allocation type

Type:

CUmemAllocationType

requestedHandleTypes

requested CUmemAllocationHandleType

Type:

CUmemAllocationHandleType

location

Location of allocation

Type:

CUmemLocation

win32HandleMetaData

Windows-specific POBJECT_ATTRIBUTES required when CU_MEM_HANDLE_TYPE_WIN32 is specified. This object attributes structure includes security attributes that define the scope of which exported allocations may be transferred to other processes. In all other cases, this field is required to be zero.

Type:

Any

allocFlags
Type:

anon_struct21

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUmemAllocationProp

Specifies the allocation properties for an allocation.

type

Allocation type

Type:

CUmemAllocationType

requestedHandleTypes

requested CUmemAllocationHandleType

Type:

CUmemAllocationHandleType

location

Location of allocation

Type:

CUmemLocation

win32HandleMetaData

Windows-specific POBJECT_ATTRIBUTES required when CU_MEM_HANDLE_TYPE_WIN32 is specified. This object attributes structure includes security attributes that define the scope of which exported allocations may be transferred to other processes. In all other cases, this field is required to be zero.

Type:

Any

allocFlags
Type:

anon_struct21

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUmulticastObjectProp_v1

Specifies the properties for a multicast object.

numDevices

The number of devices in the multicast team that will bind memory to this object

Type:

unsigned int

size

The maximum amount of memory that can be bound to this multicast object per device

Type:

size_t

handleTypes

Bitmask of exportable handle types (see CUmemAllocationHandleType) for this object

Type:

unsigned long long

flags

Flags for future use, must be zero now

Type:

unsigned long long

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUmulticastObjectProp

Specifies the properties for a multicast object.

numDevices

The number of devices in the multicast team that will bind memory to this object

Type:

unsigned int

size

The maximum amount of memory that can be bound to this multicast object per device

Type:

size_t

handleTypes

Bitmask of exportable handle types (see CUmemAllocationHandleType) for this object

Type:

unsigned long long

flags

Flags for future use, must be zero now

Type:

unsigned long long

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUmemAccessDesc_v1

Memory access descriptor

location

Location on which the request is to change its accessibility

Type:

CUmemLocation

flags

::CUmemProt accessibility flags to set on the request

Type:

CUmemAccess_flags

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUmemAccessDesc

Memory access descriptor

location

Location on which the request is to change its accessibility

Type:

CUmemLocation

flags

::CUmemProt accessibility flags to set on the request

Type:

CUmemAccess_flags

getPtr()

Get memory address of class instance
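
CUmemLocation, CUmemAllocationProp and CUmemAccessDesc are typically filled in together when using the virtual memory management API (cuMemCreate, cuMemAddressReserve, cuMemMap, cuMemSetAccess). The following is a minimal sketch, not a definitive recipe, assuming cuInit has already been called and device ordinal 0 is the target; error checking is omitted for brevity:

    from cuda.bindings import driver

    # Describe a pinned device allocation on device 0.
    prop = driver.CUmemAllocationProp()
    prop.type = driver.CUmemAllocationType.CU_MEM_ALLOCATION_TYPE_PINNED
    prop.location.type = driver.CUmemLocationType.CU_MEM_LOCATION_TYPE_DEVICE
    prop.location.id = 0  # device ordinal

    # Sizes passed to cuMemCreate must be a multiple of the granularity.
    err, gran = driver.cuMemGetAllocationGranularity(
        prop, driver.CUmemAllocationGranularity_flags.CU_MEM_ALLOC_GRANULARITY_MINIMUM)
    size = gran  # one granule, for illustration

    err, handle = driver.cuMemCreate(size, prop, 0)
    err, dptr = driver.cuMemAddressReserve(size, 0, 0, 0)
    err, = driver.cuMemMap(dptr, size, 0, handle, 0)

    # Grant read/write access to device 0 via a CUmemAccessDesc.
    desc = driver.CUmemAccessDesc()
    desc.location.type = driver.CUmemLocationType.CU_MEM_LOCATION_TYPE_DEVICE
    desc.location.id = 0
    desc.flags = driver.CUmemAccess_flags.CU_MEM_ACCESS_FLAGS_PROT_READWRITE
    err, = driver.cuMemSetAccess(dptr, size, [desc], 1)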

class cuda.bindings.driver.CUgraphExecUpdateResultInfo_v1

Result information returned by cuGraphExecUpdate

result

Gives more specific detail when a CUDA graph update fails.

Type:

CUgraphExecUpdateResult

errorNode

The “to node” of the error edge when the topologies do not match. The error node when the error is associated with a specific node. NULL when the error is generic.

Type:

CUgraphNode

errorFromNode

The “from node” of the error edge when the topologies do not match. Otherwise NULL.

Type:

CUgraphNode

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUgraphExecUpdateResultInfo

Result information returned by cuGraphExecUpdate

result

Gives more specific detail when a CUDA graph update fails.

Type:

CUgraphExecUpdateResult

errorNode

The “to node” of the error edge when the topologies do not match. The error node when the error is associated with a specific node. NULL when the error is generic.

Type:

CUgraphNode

errorFromNode

The “from node” of the error edge when the topologies do not match. Otherwise NULL.

Type:

CUgraphNode

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUmemPoolProps_v1

Specifies the properties of allocations made from the pool.

allocType

Allocation type. Currently must be specified as CU_MEM_ALLOCATION_TYPE_PINNED

Type:

CUmemAllocationType

handleTypes

Handle types that will be supported by allocations from the pool.

Type:

CUmemAllocationHandleType

location

Location where allocations should reside.

Type:

CUmemLocation

win32SecurityAttributes

Windows-specific LPSECURITYATTRIBUTES required when CU_MEM_HANDLE_TYPE_WIN32 is specified. This security attribute defines the scope of which exported allocations may be transferred to other processes. In all other cases, this field is required to be zero.

Type:

Any

maxSize

Maximum pool size. When set to 0, defaults to a system dependent value.

Type:

size_t

usage

Bitmask indicating intended usage for the pool.

Type:

unsigned short

reserved

reserved for future use, must be 0

Type:

bytes

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUmemPoolProps

Specifies the properties of allocations made from the pool.

allocType

Allocation type. Currently must be specified as CU_MEM_ALLOCATION_TYPE_PINNED

Type:

CUmemAllocationType

handleTypes

Handle types that will be supported by allocations from the pool.

Type:

CUmemAllocationHandleType

location

Location where allocations should reside.

Type:

CUmemLocation

win32SecurityAttributes

Windows-specific LPSECURITYATTRIBUTES required when CU_MEM_HANDLE_TYPE_WIN32 is specified. This security attribute defines the scope of which exported allocations may be transferred to other processes. In all other cases, this field is required to be zero.

Type:

Any

maxSize

Maximum pool size. When set to 0, defaults to a system dependent value.

Type:

size_t

usage

Bitmask indicating intended usage for the pool.

Type:

unsigned short

reserved

reserved for future use, must be 0

Type:

bytes

getPtr()

Get memory address of class instance
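
A hedged sketch of creating an explicit memory pool from these properties with cuMemPoolCreate, assuming the driver is initialized and device 0 is the target; error checking is omitted:

    from cuda.bindings import driver

    props = driver.CUmemPoolProps()
    props.allocType = driver.CUmemAllocationType.CU_MEM_ALLOCATION_TYPE_PINNED
    props.handleTypes = driver.CUmemAllocationHandleType.CU_MEM_HANDLE_TYPE_NONE
    props.location.type = driver.CUmemLocationType.CU_MEM_LOCATION_TYPE_DEVICE
    props.location.id = 0   # device ordinal
    props.maxSize = 0       # 0 -> system-dependent default

    err, pool = driver.cuMemPoolCreate(props)
    # ... allocate with cuMemAllocFromPoolAsync, free with cuMemFreeAsync ...
    err, = driver.cuMemPoolDestroy(pool)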

class cuda.bindings.driver.CUmemPoolPtrExportData_v1

Opaque data for exporting a pool allocation

reserved
Type:

bytes

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUmemPoolPtrExportData

Opaque data for exporting a pool allocation

reserved
Type:

bytes

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_MEM_ALLOC_NODE_PARAMS_v1

Memory allocation node parameters

poolProps

in: location where the allocation should reside (specified in ::location). ::handleTypes must be CU_MEM_HANDLE_TYPE_NONE. IPC is not supported.

Type:

CUmemPoolProps

accessDescs

in: array of memory access descriptors. Used to describe peer GPU access

Type:

CUmemAccessDesc

accessDescCount

in: number of memory access descriptors. Must not exceed the number of GPUs.

Type:

size_t

bytesize

in: size in bytes of the requested allocation

Type:

size_t

dptr

out: address of the allocation returned by CUDA

Type:

CUdeviceptr

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_MEM_ALLOC_NODE_PARAMS

Memory allocation node parameters

poolProps

in: location where the allocation should reside (specified in ::location). ::handleTypes must be CU_MEM_HANDLE_TYPE_NONE. IPC is not supported.

Type:

CUmemPoolProps

accessDescs

in: array of memory access descriptors. Used to describe peer GPU access

Type:

CUmemAccessDesc

accessDescCount

in: number of memory access descriptors. Must not exceed the number of GPUs.

Type:

size_t

bytesize

in: size in bytes of the requested allocation

Type:

size_t

dptr

out: address of the allocation returned by CUDA

Type:

CUdeviceptr

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_MEM_ALLOC_NODE_PARAMS_v2

Memory allocation node parameters

poolProps

in: location where the allocation should reside (specified in ::location). ::handleTypes must be CU_MEM_HANDLE_TYPE_NONE. IPC is not supported.

Type:

CUmemPoolProps

accessDescs

in: array of memory access descriptors. Used to describe peer GPU access

Type:

CUmemAccessDesc

accessDescCount

in: number of memory access descriptors. Must not exceed the number of GPUs.

Type:

size_t

bytesize

in: size in bytes of the requested allocation

Type:

size_t

dptr

out: address of the allocation returned by CUDA

Type:

CUdeviceptr

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_MEM_FREE_NODE_PARAMS

Memory free node parameters

dptr

in: the pointer to free

Type:

CUdeviceptr

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_CHILD_GRAPH_NODE_PARAMS

Child graph node parameters

graph

The child graph to clone into the node for node creation, or a handle to the graph owned by the node for node query

Type:

CUgraph

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_EVENT_RECORD_NODE_PARAMS

Event record node parameters

event

The event to record when the node executes

Type:

CUevent

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUDA_EVENT_WAIT_NODE_PARAMS

Event wait node parameters

event

The event to wait on from the node

Type:

CUevent

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUgraphNodeParams

Graph node parameters. See cuGraphAddNode.

type

Type of the node

Type:

CUgraphNodeType

reserved0

Reserved. Must be zero.

Type:

List[int]

reserved1

Padding. Unused bytes must be zero.

Type:

List[long long]

kernel

Kernel node parameters.

Type:

CUDA_KERNEL_NODE_PARAMS_v3

memcpy

Memcpy node parameters.

Type:

CUDA_MEMCPY_NODE_PARAMS

memset

Memset node parameters.

Type:

CUDA_MEMSET_NODE_PARAMS_v2

host

Host node parameters.

Type:

CUDA_HOST_NODE_PARAMS_v2

graph

Child graph node parameters.

Type:

CUDA_CHILD_GRAPH_NODE_PARAMS

eventWait

Event wait node parameters.

Type:

CUDA_EVENT_WAIT_NODE_PARAMS

eventRecord

Event record node parameters.

Type:

CUDA_EVENT_RECORD_NODE_PARAMS

extSemSignal

External semaphore signal node parameters.

Type:

CUDA_EXT_SEM_SIGNAL_NODE_PARAMS_v2

extSemWait

External semaphore wait node parameters.

Type:

CUDA_EXT_SEM_WAIT_NODE_PARAMS_v2

alloc

Memory allocation node parameters.

Type:

CUDA_MEM_ALLOC_NODE_PARAMS_v2

free

Memory free node parameters.

Type:

CUDA_MEM_FREE_NODE_PARAMS

memOp

MemOp node parameters.

Type:

CUDA_BATCH_MEM_OP_NODE_PARAMS_v2

conditional

Conditional node parameters.

Type:

CUDA_CONDITIONAL_NODE_PARAMS

reserved2

Reserved bytes. Must be zero.

Type:

long long

getPtr()

Get memory address of class instance
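
A hedged sketch of filling CUgraphNodeParams for an event record node and passing it to cuGraphAddNode. The exact handling of the dependency arguments may vary between binding versions, so treat this as an outline rather than a definitive implementation; error checking is omitted:

    from cuda.bindings import driver

    err, graph = driver.cuGraphCreate(0)
    err, event = driver.cuEventCreate(0)

    node_params = driver.CUgraphNodeParams()
    node_params.type = driver.CUgraphNodeType.CU_GRAPH_NODE_TYPE_EVENT_RECORD
    node_params.eventRecord.event = event

    # No dependencies: pass no dependency list and a count of 0.
    err, node = driver.cuGraphAddNode(graph, None, 0, node_params)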

class cuda.bindings.driver.CUeglFrame_v1

CUDA EGLFrame structure descriptor - structure defining one frame of EGL. Each frame may contain one or more planes depending on whether the surface is multiplanar or not.

frame
Type:

anon_union14

width

Width of first plane

Type:

unsigned int

height

Height of first plane

Type:

unsigned int

depth

Depth of first plane

Type:

unsigned int

pitch

Pitch of first plane

Type:

unsigned int

planeCount

Number of planes

Type:

unsigned int

numChannels

Number of channels for the plane

Type:

unsigned int

frameType

Array or Pitch

Type:

CUeglFrameType

eglColorFormat

CUDA EGL Color Format

Type:

CUeglColorFormat

cuFormat

CUDA Array Format

Type:

CUarray_format

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUeglFrame

CUDA EGLFrame structure descriptor - structure defining one frame of EGL. Each frame may contain one or more planes depending on whether the surface is multiplanar or not.

frame
Type:

anon_union14

width

Width of first plane

Type:

unsigned int

height

Height of first plane

Type:

unsigned int

depth

Depth of first plane

Type:

unsigned int

pitch

Pitch of first plane

Type:

unsigned int

planeCount

Number of planes

Type:

unsigned int

numChannels

Number of channels for the plane

Type:

unsigned int

frameType

Array or Pitch

Type:

CUeglFrameType

eglColorFormat

CUDA EGL Color Format

Type:

CUeglColorFormat

cuFormat

CUDA Array Format

Type:

CUarray_format

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUeglStreamConnection(*args, **kwargs)

CUDA EGLStream Connection

getPtr()

Get memory address of class instance

driver.CUDA_VERSION = 12060

CUDA API version number

driver.CU_IPC_HANDLE_SIZE = 64

CUDA IPC handle size

driver.CU_STREAM_LEGACY = 1

Legacy stream handle

Stream handle that can be passed as a CUstream to use an implicit stream with legacy synchronization behavior.

See details of the link_sync_behavior

driver.CU_STREAM_PER_THREAD = 2

Per-thread stream handle

Stream handle that can be passed as a CUstream to use an implicit stream with per-thread synchronization behavior.

See details of the link_sync_behavior

driver.CU_COMPUTE_ACCELERATED_TARGET_BASE = 65536
driver.CU_GRAPH_COND_ASSIGN_DEFAULT = 1

Conditional node handle flags. Default value is applied when the graph is launched.

driver.CU_GRAPH_KERNEL_NODE_PORT_DEFAULT = 0

This port activates when the kernel has finished executing.

driver.CU_GRAPH_KERNEL_NODE_PORT_PROGRAMMATIC = 1

This port activates when all blocks of the kernel have performed cudaTriggerProgrammaticLaunchCompletion() or have terminated. It must be used with edge type CU_GRAPH_DEPENDENCY_TYPE_PROGRAMMATIC. See also CU_LAUNCH_ATTRIBUTE_PROGRAMMATIC_EVENT.

driver.CU_GRAPH_KERNEL_NODE_PORT_LAUNCH_ORDER = 2

This port activates when all blocks of the kernel have begun execution. See also CU_LAUNCH_ATTRIBUTE_LAUNCH_COMPLETION_EVENT.

driver.CU_KERNEL_NODE_ATTRIBUTE_ACCESS_POLICY_WINDOW = 1
driver.CU_KERNEL_NODE_ATTRIBUTE_COOPERATIVE = 2
driver.CU_KERNEL_NODE_ATTRIBUTE_CLUSTER_DIMENSION = 4
driver.CU_KERNEL_NODE_ATTRIBUTE_CLUSTER_SCHEDULING_POLICY_PREFERENCE = 5
driver.CU_KERNEL_NODE_ATTRIBUTE_PRIORITY = 8
driver.CU_KERNEL_NODE_ATTRIBUTE_MEM_SYNC_DOMAIN_MAP = 9
driver.CU_KERNEL_NODE_ATTRIBUTE_MEM_SYNC_DOMAIN = 10
driver.CU_KERNEL_NODE_ATTRIBUTE_DEVICE_UPDATABLE_KERNEL_NODE = 13
driver.CU_KERNEL_NODE_ATTRIBUTE_PREFERRED_SHARED_MEMORY_CARVEOUT = 14
driver.CU_STREAM_ATTRIBUTE_ACCESS_POLICY_WINDOW = 1
driver.CU_STREAM_ATTRIBUTE_SYNCHRONIZATION_POLICY = 3
driver.CU_STREAM_ATTRIBUTE_PRIORITY = 8
driver.CU_STREAM_ATTRIBUTE_MEM_SYNC_DOMAIN_MAP = 9
driver.CU_STREAM_ATTRIBUTE_MEM_SYNC_DOMAIN = 10
driver.CU_MEMHOSTALLOC_PORTABLE = 1

If set, host memory is portable between CUDA contexts. Flag for cuMemHostAlloc()

driver.CU_MEMHOSTALLOC_DEVICEMAP = 2

If set, host memory is mapped into CUDA address space and cuMemHostGetDevicePointer() may be called on the host pointer. Flag for cuMemHostAlloc()

driver.CU_MEMHOSTALLOC_WRITECOMBINED = 4

If set, host memory is allocated as write-combined - fast to write, faster to DMA, slow to read except via SSE4 streaming load instruction (MOVNTDQA). Flag for cuMemHostAlloc()
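
These flags can be OR-ed together and passed to cuMemHostAlloc. A minimal sketch, assuming an initialized driver and a current context; error checking is omitted:

    from cuda.bindings import driver

    nbytes = 1 << 20
    flags = driver.CU_MEMHOSTALLOC_PORTABLE | driver.CU_MEMHOSTALLOC_DEVICEMAP
    err, host_ptr = driver.cuMemHostAlloc(nbytes, flags)
    # ... use the pinned buffer, e.g. via cuMemHostGetDevicePointer ...
    err, = driver.cuMemFreeHost(host_ptr)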

driver.CU_MEMHOSTREGISTER_PORTABLE = 1

If set, host memory is portable between CUDA contexts. Flag for cuMemHostRegister()

driver.CU_MEMHOSTREGISTER_DEVICEMAP = 2

If set, host memory is mapped into CUDA address space and cuMemHostGetDevicePointer() may be called on the host pointer. Flag for cuMemHostRegister()

driver.CU_MEMHOSTREGISTER_IOMEMORY = 4

If set, the passed memory pointer is treated as pointing to some memory-mapped I/O space, e.g. belonging to a third-party PCIe device. On Windows the flag is a no-op. On Linux that memory is marked as non cache-coherent for the GPU and is expected to be physically contiguous. It may return CUDA_ERROR_NOT_PERMITTED if run as an unprivileged user, CUDA_ERROR_NOT_SUPPORTED on older Linux kernel versions. On all other platforms, it is not supported and CUDA_ERROR_NOT_SUPPORTED is returned. Flag for cuMemHostRegister()

driver.CU_MEMHOSTREGISTER_READ_ONLY = 8

If set, the passed memory pointer is treated as pointing to memory that is considered read-only by the device. On platforms without CU_DEVICE_ATTRIBUTE_PAGEABLE_MEMORY_ACCESS_USES_HOST_PAGE_TABLES, this flag is required in order to register memory mapped to the CPU as read-only. Support for the use of this flag can be queried from the device attribute CU_DEVICE_ATTRIBUTE_READ_ONLY_HOST_REGISTER_SUPPORTED. Using this flag with a current context associated with a device that does not have this attribute set will cause cuMemHostRegister to error with CUDA_ERROR_NOT_SUPPORTED.

driver.CU_ARRAY_SPARSE_PROPERTIES_SINGLE_MIPTAIL = 1

Indicates that the layered sparse CUDA array or CUDA mipmapped array has a single mip tail region for all layers

driver.CU_TENSOR_MAP_NUM_QWORDS = 16

Size of tensor map descriptor

driver.CUDA_EXTERNAL_MEMORY_DEDICATED = 1

Indicates that the external memory object is a dedicated resource

driver.CUDA_EXTERNAL_SEMAPHORE_SIGNAL_SKIP_NVSCIBUF_MEMSYNC = 1

When the flags parameter of CUDA_EXTERNAL_SEMAPHORE_SIGNAL_PARAMS contains this flag, it indicates that signaling an external semaphore object should skip performing appropriate memory synchronization operations over all the external memory objects that are imported as CU_EXTERNAL_MEMORY_HANDLE_TYPE_NVSCIBUF, which otherwise are performed by default to ensure data coherency with other importers of the same NvSciBuf memory objects.

driver.CUDA_EXTERNAL_SEMAPHORE_WAIT_SKIP_NVSCIBUF_MEMSYNC = 2

When the flags parameter of CUDA_EXTERNAL_SEMAPHORE_WAIT_PARAMS contains this flag, it indicates that waiting on an external semaphore object should skip performing appropriate memory synchronization operations over all the external memory objects that are imported as CU_EXTERNAL_MEMORY_HANDLE_TYPE_NVSCIBUF, which otherwise are performed by default to ensure data coherency with other importers of the same NvSciBuf memory objects.

driver.CUDA_NVSCISYNC_ATTR_SIGNAL = 1

When flags of cuDeviceGetNvSciSyncAttributes is set to this, it indicates that application needs signaler specific NvSciSyncAttr to be filled by cuDeviceGetNvSciSyncAttributes.

driver.CUDA_NVSCISYNC_ATTR_WAIT = 2

When flags of cuDeviceGetNvSciSyncAttributes is set to this, it indicates that application needs waiter specific NvSciSyncAttr to be filled by cuDeviceGetNvSciSyncAttributes.

driver.CU_MEM_CREATE_USAGE_TILE_POOL = 1

This flag if set indicates that the memory will be used as a tile pool.

driver.CUDA_COOPERATIVE_LAUNCH_MULTI_DEVICE_NO_PRE_LAUNCH_SYNC = 1

If set, each kernel launched as part of cuLaunchCooperativeKernelMultiDevice only waits for prior work in the stream corresponding to that GPU to complete before the kernel begins execution.

driver.CUDA_COOPERATIVE_LAUNCH_MULTI_DEVICE_NO_POST_LAUNCH_SYNC = 2

If set, any subsequent work pushed in a stream that participated in a call to cuLaunchCooperativeKernelMultiDevice will only wait for the kernel launched on the GPU corresponding to that stream to complete before it begins execution.

driver.CUDA_ARRAY3D_LAYERED = 1

If set, the CUDA array is a collection of layers, where each layer is either a 1D or a 2D array and the Depth member of CUDA_ARRAY3D_DESCRIPTOR specifies the number of layers, not the depth of a 3D array.

driver.CUDA_ARRAY3D_2DARRAY = 1

Deprecated, use CUDA_ARRAY3D_LAYERED

driver.CUDA_ARRAY3D_SURFACE_LDST = 2

This flag must be set in order to bind a surface reference to the CUDA array

driver.CUDA_ARRAY3D_CUBEMAP = 4

If set, the CUDA array is a collection of six 2D arrays, representing faces of a cube. The width of such a CUDA array must be equal to its height, and Depth must be six. If CUDA_ARRAY3D_LAYERED flag is also set, then the CUDA array is a collection of cubemaps and Depth must be a multiple of six.

driver.CUDA_ARRAY3D_TEXTURE_GATHER = 8

This flag must be set in order to perform texture gather operations on a CUDA array.

driver.CUDA_ARRAY3D_DEPTH_TEXTURE = 16

This flag if set indicates that the CUDA array is a DEPTH_TEXTURE.

driver.CUDA_ARRAY3D_COLOR_ATTACHMENT = 32

This flag indicates that the CUDA array may be bound as a color target in an external graphics API

driver.CUDA_ARRAY3D_SPARSE = 64

This flag if set indicates that the CUDA array or CUDA mipmapped array is a sparse CUDA array or CUDA mipmapped array respectively

driver.CUDA_ARRAY3D_DEFERRED_MAPPING = 128

This flag if set indicates that the CUDA array or CUDA mipmapped array will allow deferred memory mapping

driver.CUDA_ARRAY3D_VIDEO_ENCODE_DECODE = 256

This flag indicates that the CUDA array will be used for hardware accelerated video encode/decode operations.

driver.CU_TRSA_OVERRIDE_FORMAT = 1

Override the texref format with a format inferred from the array. Flag for cuTexRefSetArray()

driver.CU_TRSF_READ_AS_INTEGER = 1

Read the texture as integers rather than promoting the values to floats in the range [0,1]. Flag for cuTexRefSetFlags() and cuTexObjectCreate()

driver.CU_TRSF_NORMALIZED_COORDINATES = 2

Use normalized texture coordinates in the range [0,1) instead of [0,dim). Flag for cuTexRefSetFlags() and cuTexObjectCreate()

driver.CU_TRSF_SRGB = 16

Perform sRGB->linear conversion during texture read. Flag for cuTexRefSetFlags() and cuTexObjectCreate()

driver.CU_TRSF_DISABLE_TRILINEAR_OPTIMIZATION = 32

Disable any trilinear filtering optimizations. Flag for cuTexRefSetFlags() and cuTexObjectCreate()

driver.CU_TRSF_SEAMLESS_CUBEMAP = 64

Enable seamless cube map filtering. Flag for cuTexObjectCreate()

driver.CU_LAUNCH_PARAM_END_AS_INT = 0

C++ compile time constant for CU_LAUNCH_PARAM_END

driver.CU_LAUNCH_PARAM_END = 0

End of array terminator for the extra parameter to cuLaunchKernel

driver.CU_LAUNCH_PARAM_BUFFER_POINTER_AS_INT = 1

C++ compile time constant for CU_LAUNCH_PARAM_BUFFER_POINTER

driver.CU_LAUNCH_PARAM_BUFFER_POINTER = 1

Indicator that the next value in the extra parameter to cuLaunchKernel will be a pointer to a buffer containing all kernel parameters used for launching kernel f. This buffer needs to honor all alignment/padding requirements of the individual parameters. If CU_LAUNCH_PARAM_BUFFER_SIZE is not also specified in the extra array, then CU_LAUNCH_PARAM_BUFFER_POINTER will have no effect.

driver.CU_LAUNCH_PARAM_BUFFER_SIZE_AS_INT = 2

C++ compile time constant for CU_LAUNCH_PARAM_BUFFER_SIZE

driver.CU_LAUNCH_PARAM_BUFFER_SIZE = 2

Indicator that the next value in the extra parameter to cuLaunchKernel will be a pointer to a size_t which contains the size of the buffer specified with CU_LAUNCH_PARAM_BUFFER_POINTER. It is required that CU_LAUNCH_PARAM_BUFFER_POINTER also be specified in the extra array if the value associated with CU_LAUNCH_PARAM_BUFFER_SIZE is not zero.

driver.CU_PARAM_TR_DEFAULT = -1

For texture references loaded into the module, use default texunit from texture reference.

driver.CU_DEVICE_CPU = -1

Device that represents the CPU

driver.CU_DEVICE_INVALID = -2

Device that represents an invalid device

driver.MAX_PLANES = 3

Maximum number of planes per frame

driver.CUDA_EGL_INFINITE_TIMEOUT = -1

Indicates that timeout for cuEGLStreamConsumerAcquireFrame is infinite.

Error Handling

This section describes the error handling functions of the low-level CUDA driver application programming interface.

cuda.bindings.driver.cuGetErrorString(error: CUresult)

Gets the string description of an error code.

Sets *pStr to the address of a NULL-terminated string description of the error code error. If the error code is not recognized, CUDA_ERROR_INVALID_VALUE will be returned and *pStr will be set to the NULL address.

Parameters:

error (CUresult) – Error code to convert to string

Returns:

cuda.bindings.driver.cuGetErrorName(error: CUresult)

Gets the string representation of an error code enum name.

Sets *pStr to the address of a NULL-terminated string representation of the name of the enum error code error. If the error code is not recognized, CUDA_ERROR_INVALID_VALUE will be returned and *pStr will be set to the NULL address.

Parameters:

error (CUresult) – Error code to convert to string

Returns:
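
Because every binding call returns a tuple whose first element is a CUresult, these two functions are commonly wrapped in a small helper. A sketch follows; the helper name check is illustrative and not part of the bindings:

    from cuda.bindings import driver

    def check(err):
        """Raise if a driver call did not return CUDA_SUCCESS."""
        if err != driver.CUresult.CUDA_SUCCESS:
            _, name = driver.cuGetErrorName(err)
            _, desc = driver.cuGetErrorString(err)
            raise RuntimeError(f"{name.decode()}: {desc.decode()}")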

Initialization

This section describes the initialization functions of the low-level CUDA driver application programming interface.

cuda.bindings.driver.cuInit(unsigned int Flags)

Initialize the CUDA driver API.

Initializes the driver API and must be called before any other function from the driver API in the current process. Currently, the Flags parameter must be 0. If cuInit() has not been called, any function from the driver API will return CUDA_ERROR_NOT_INITIALIZED.

Parameters:

Flags (unsigned int) – Initialization flag for CUDA.

Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_INVALID_DEVICE, CUDA_ERROR_SYSTEM_DRIVER_MISMATCH, CUDA_ERROR_COMPAT_NOT_SUPPORTED_ON_DEVICE

Return type:

CUresult
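
A minimal initialization sketch (Flags must currently be 0):

    from cuda.bindings import driver

    err, = driver.cuInit(0)
    if err != driver.CUresult.CUDA_SUCCESS:
        raise RuntimeError(f"cuInit failed: {err}")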

Version Management

This section describes the version management functions of the low-level CUDA driver application programming interface.

cuda.bindings.driver.cuDriverGetVersion()

Returns the latest CUDA version supported by the driver.

Returns in *driverVersion the version of CUDA supported by the driver. The version is returned as (1000 * major + 10 * minor). For example, CUDA 9.2 would be represented by 9020.

This function automatically returns CUDA_ERROR_INVALID_VALUE if driverVersion is NULL.

Returns:
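
A short sketch decoding the packed version number (for example, 12060 corresponds to CUDA 12.6):

    from cuda.bindings import driver

    err, version = driver.cuDriverGetVersion()
    major, minor = version // 1000, (version % 1000) // 10
    print(f"Driver supports CUDA {major}.{minor}")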

Device Management

This section describes the device management functions of the low-level CUDA driver application programming interface.

cuda.bindings.driver.cuDeviceGet(int ordinal)

Returns a handle to a compute device.

Returns in *device a device handle given an ordinal in the range [0, cuDeviceGetCount()-1].

Parameters:

ordinal (int) – Device number to get handle for

Returns:

cuda.bindings.driver.cuDeviceGetCount()

Returns the number of compute-capable devices.

Returns in *count the number of devices with compute capability greater than or equal to 2.0 that are available for execution. If there is no such device, cuDeviceGetCount() returns 0.

Returns:

cuda.bindings.driver.cuDeviceGetName(int length, dev)

Returns an identifier string for the device.

Returns an ASCII string identifying the device dev in the NULL-terminated string pointed to by name. length specifies the maximum length of the string that may be returned.

Parameters:
  • length (int) – Maximum length of string to store in name

  • dev (CUdevice) – Device to get identifier string for

Returns:
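
A sketch that enumerates devices with cuDeviceGetCount, cuDeviceGet and cuDeviceGetName, assuming cuInit has already been called; error checking is omitted:

    from cuda.bindings import driver

    err, count = driver.cuDeviceGetCount()
    for ordinal in range(count):
        err, dev = driver.cuDeviceGet(ordinal)
        err, name = driver.cuDeviceGetName(256, dev)  # name is returned as bytes
        print(ordinal, name.decode().rstrip("\x00"))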

cuda.bindings.driver.cuDeviceGetUuid(dev)

Returns a UUID for the device.

Note that there is a later version of this API, cuDeviceGetUuid_v2, which will supplant this version in CUDA 12.0; this version is retained for minor version compatibility.

Returns 16 octets identifying the device dev in the structure pointed to by uuid.

Parameters:

dev (CUdevice) – Device to get identifier string for

Returns:

cuda.bindings.driver.cuDeviceGetUuid_v2(dev)

Returns a UUID for the device (11.4+)

Returns 16 octets identifying the device dev in the structure pointed to by uuid. If the device is in MIG mode, returns its MIG UUID, which uniquely identifies the subscribed MIG compute instance.

Parameters:

dev (CUdevice) – Device to get identifier string for

Returns:

cuda.bindings.driver.cuDeviceGetLuid(dev)

Return an LUID and device node mask for the device.

Returns identifying information (luid and deviceNodeMask) to allow matching the device with graphics APIs.

Parameters:

dev (CUdevice) – Device to get identifier string for

Returns:

cuda.bindings.driver.cuDeviceTotalMem(dev)

Returns the total amount of memory on the device.

Returns in *bytes the total amount of memory available on the device dev in bytes.

Parameters:

dev (CUdevice) – Device handle

Returns:

cuda.bindings.driver.cuDeviceGetTexture1DLinearMaxWidth(pformat: CUarray_format, unsigned int numChannels, dev)

Returns the maximum number of elements allocatable in a 1D linear texture for a given texture element size.

Returns in maxWidthInElements the maximum number of texture elements allocatable in a 1D linear texture for the given pformat and numChannels.

Parameters:
  • pformat (CUarray_format) – Texture format.

  • numChannels (unsigned) – Number of channels per texture element.

  • dev (CUdevice) – Device handle.

Returns:

cuda.bindings.driver.cuDeviceGetAttribute(attrib: CUdevice_attribute, dev)

Returns information about the device.

Returns in *pi the integer value of the attribute attrib on device dev. The supported attributes are:

Parameters:
Returns:

cuda.bindings.driver.cuDeviceGetNvSciSyncAttributes(nvSciSyncAttrList, dev, int flags)

Return NvSciSync attributes that this device can support.

Returns in nvSciSyncAttrList the properties of NvSciSync that this CUDA device, dev, can support. The returned nvSciSyncAttrList can be used to create an NvSciSync object that matches this device’s capabilities.

If the NvSciSyncAttrKey_RequiredPerm field in nvSciSyncAttrList is already set, this API will return CUDA_ERROR_INVALID_VALUE.

The application should set nvSciSyncAttrList to a valid NvSciSyncAttrList, failing which this API will return CUDA_ERROR_INVALID_HANDLE.

The flags parameter controls how the application intends to use the NvSciSync created from the nvSciSyncAttrList. The valid flags are:

At least one of these flags must be set, failing which the API returns CUDA_ERROR_INVALID_VALUE. The two flags are orthogonal to one another: a developer may set both, which allows setting both wait-specific and signal-specific attributes in the same nvSciSyncAttrList.

Note that this API updates the input nvSciSyncAttrList with values equivalent to the following public attribute key-values: NvSciSyncAttrKey_RequiredPerm is set to

  • NvSciSyncAccessPerm_SignalOnly if CUDA_NVSCISYNC_ATTR_SIGNAL is set in flags.

  • NvSciSyncAccessPerm_WaitOnly if CUDA_NVSCISYNC_ATTR_WAIT is set in flags.

  • NvSciSyncAccessPerm_WaitSignal if both CUDA_NVSCISYNC_ATTR_WAIT and CUDA_NVSCISYNC_ATTR_SIGNAL are set in flags. NvSciSyncAttrKey_PrimitiveInfo is set to

  • NvSciSyncAttrValPrimitiveType_SysmemSemaphore on any valid device.

  • NvSciSyncAttrValPrimitiveType_Syncpoint if device is a Tegra device.

  • NvSciSyncAttrValPrimitiveType_SysmemSemaphorePayload64b if device is GA10X+. NvSciSyncAttrKey_GpuId is set to the same UUID that is returned for this device from cuDeviceGetUuid.

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_INVALID_HANDLE, CUDA_ERROR_INVALID_DEVICE, CUDA_ERROR_NOT_SUPPORTED, CUDA_ERROR_OUT_OF_MEMORY

Parameters:
  • nvSciSyncAttrList (Any) – Return NvSciSync attributes supported.

  • dev (CUdevice) – Valid Cuda Device to get NvSciSync attributes for.

  • flags (int) – flags describing NvSciSync usage.

Return type:

CUresult

cuda.bindings.driver.cuDeviceSetMemPool(dev, pool)

Sets the current memory pool of a device.

The memory pool must be local to the specified device. cuMemAllocAsync allocates from the current mempool of the provided stream’s device. By default, a device’s current memory pool is its default memory pool.

Parameters:
Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult

Notes

Use cuMemAllocFromPoolAsync to specify asynchronous allocations from a device different than the one the stream runs on.

cuda.bindings.driver.cuDeviceGetMemPool(dev)

Gets the current mempool for a device.

Returns the last pool provided to cuDeviceSetMemPool for this device or the device’s default memory pool if cuDeviceSetMemPool has never been called. By default the current mempool is the default mempool for a device. Otherwise the returned pool must have been set with cuDeviceSetMemPool.

Parameters:

dev (CUdevice) – None

Returns:

cuda.bindings.driver.cuDeviceGetDefaultMemPool(dev)

Returns the default mempool of a device.

The default mempool of a device contains device memory from that device.

Parameters:

dev (CUdevice) – None

Returns:
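
A sketch combining cuDeviceGetDefaultMemPool, cuDeviceSetMemPool and cuDeviceGetMemPool. Re-installing the default pool is effectively a no-op, but it shows the call pattern; error checking is omitted:

    from cuda.bindings import driver

    err, dev = driver.cuDeviceGet(0)
    err, default_pool = driver.cuDeviceGetDefaultMemPool(dev)
    err, = driver.cuDeviceSetMemPool(dev, default_pool)
    err, current_pool = driver.cuDeviceGetMemPool(dev)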

cuda.bindings.driver.cuDeviceGetExecAffinitySupport(typename: CUexecAffinityType, dev)

Returns information about the execution affinity support of the device.

Returns in *pi whether execution affinity type typename is supported by device dev. The supported types are:

Parameters:
Returns:

cuda.bindings.driver.cuFlushGPUDirectRDMAWrites(target: CUflushGPUDirectRDMAWritesTarget, scope: CUflushGPUDirectRDMAWritesScope)

Blocks until remote writes are visible to the specified scope.

Blocks until GPUDirect RDMA writes to the target context via mappings created through APIs like nvidia_p2p_get_pages (see https://docs.nvidia.com/cuda/gpudirect-rdma for more information), are visible to the specified scope.

If the scope equals or lies within the scope indicated by CU_DEVICE_ATTRIBUTE_GPU_DIRECT_RDMA_WRITES_ORDERING, the call will be a no-op and can be safely omitted for performance. This can be determined by comparing the numerical values between the two enums, with smaller scopes having smaller values.

Users may query support for this API via CU_DEVICE_ATTRIBUTE_GPU_DIRECT_RDMA_FLUSH_WRITES_OPTIONS.

Parameters:
Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_VALUE,

Return type:

CUresult

Primary Context Management

This section describes the primary context management functions of the low-level CUDA driver application programming interface.

The primary context is unique per device and shared with the CUDA runtime API. These functions allow integration with other libraries using CUDA.

cuda.bindings.driver.cuDevicePrimaryCtxRetain(dev)

Retain the primary context on the GPU.

Retains the primary context on the device. Once the user successfully retains the primary context, the primary context will be active and available to the user until the user releases it with cuDevicePrimaryCtxRelease() or resets it with cuDevicePrimaryCtxReset(). Unlike cuCtxCreate() the newly retained context is not pushed onto the stack.

Retaining the primary context for the first time will fail with CUDA_ERROR_UNKNOWN if the compute mode of the device is CU_COMPUTEMODE_PROHIBITED. The function cuDeviceGetAttribute() can be used with CU_DEVICE_ATTRIBUTE_COMPUTE_MODE to determine the compute mode of the device. The nvidia-smi tool can be used to set the compute mode for devices. Documentation for nvidia-smi can be obtained by passing a -h option to it.

Please note that the primary context always supports pinned allocations. Other flags can be specified by cuDevicePrimaryCtxSetFlags().

Parameters:

dev (CUdevice) – Device for which primary context is requested

Returns:

cuda.bindings.driver.cuDevicePrimaryCtxRelease(dev)

Release the primary context on the GPU.

Releases the primary context interop on the device. A retained context should always be released once the user is done using it. The context is automatically reset once the last reference to it is released. This behavior is different when the primary context was retained by the CUDA runtime from CUDA 4.0 and earlier. In this case, the primary context remains always active.

Releasing a primary context that has not been previously retained will fail with CUDA_ERROR_INVALID_CONTEXT.

Please note that unlike cuCtxDestroy() this method does not pop the context from stack in any circumstances.

Parameters:

dev (CUdevice) – Device which primary context is released

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_DEVICE, CUDA_ERROR_INVALID_CONTEXT

Return type:

CUresult
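
A sketch of the usual retain/release pattern, making the primary context current with cuCtxSetCurrent in between; error checking is omitted:

    from cuda.bindings import driver

    err, dev = driver.cuDeviceGet(0)
    err, ctx = driver.cuDevicePrimaryCtxRetain(dev)
    err, = driver.cuCtxSetCurrent(ctx)   # make it current for this thread
    # ... issue driver API work here ...
    err, = driver.cuDevicePrimaryCtxRelease(dev)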

cuda.bindings.driver.cuDevicePrimaryCtxSetFlags(dev, unsigned int flags)

Set flags for the primary context.

Sets the flags for the primary context on the device, overwriting previously set ones.

The three LSBs of the flags parameter can be used to control how the OS thread, which owns the CUDA context at the time of an API call, interacts with the OS scheduler when waiting for results from the GPU. Only one of the scheduling flags can be set when creating a context.

  • CU_CTX_SCHED_SPIN: Instruct CUDA to actively spin when waiting for results from the GPU. This can decrease latency when waiting for the GPU, but may lower the performance of CPU threads if they are performing work in parallel with the CUDA thread.

  • CU_CTX_SCHED_YIELD: Instruct CUDA to yield its thread when waiting for results from the GPU. This can increase latency when waiting for the GPU, but can increase the performance of CPU threads performing work in parallel with the GPU.

  • CU_CTX_SCHED_BLOCKING_SYNC: Instruct CUDA to block the CPU thread on a synchronization primitive when waiting for the GPU to finish work.

  • CU_CTX_BLOCKING_SYNC: Instruct CUDA to block the CPU thread on a synchronization primitive when waiting for the GPU to finish work. Deprecated: This flag was deprecated as of CUDA 4.0 and was replaced with CU_CTX_SCHED_BLOCKING_SYNC.

  • CU_CTX_SCHED_AUTO: The default value if the flags parameter is zero, uses a heuristic based on the number of active CUDA contexts in the process C and the number of logical processors in the system P. If C > P, then CUDA will yield to other OS threads when waiting for the GPU (CU_CTX_SCHED_YIELD), otherwise CUDA will not yield while waiting for results and actively spin on the processor (CU_CTX_SCHED_SPIN). Additionally, on Tegra devices, CU_CTX_SCHED_AUTO uses a heuristic based on the power profile of the platform and may choose CU_CTX_SCHED_BLOCKING_SYNC for low-powered devices.

  • CU_CTX_LMEM_RESIZE_TO_MAX: Instruct CUDA to not reduce local memory after resizing local memory for a kernel. This can prevent thrashing by local memory allocations when launching many kernels with high local memory usage at the cost of potentially increased memory usage. Deprecated: This flag is deprecated and the behavior enabled by this flag is now the default and cannot be disabled.

  • CU_CTX_COREDUMP_ENABLE: If GPU coredumps have not been enabled globally with cuCoredumpSetAttributeGlobal or environment variables, this flag can be set during context creation to instruct CUDA to create a coredump if this context raises an exception during execution. These environment variables are described in the CUDA-GDB user guide under the “GPU core dump support” section. The initial settings will be taken from the global settings at the time of context creation. The other settings that control coredump output can be modified by calling cuCoredumpSetAttribute from the created context after it becomes current.

  • CU_CTX_USER_COREDUMP_ENABLE: If user-triggered GPU coredumps have not been enabled globally with cuCoredumpSetAttributeGlobal or environment variables, this flag can be set during context creation to instruct CUDA to create a coredump if data is written to a certain pipe that is present in the OS space. These environment variables are described in the CUDA-GDB user guide under the “GPU core dump support” section. It is important to note that the pipe name must be set with cuCoredumpSetAttributeGlobal before creating the context if this flag is used. Setting this flag implies that CU_CTX_COREDUMP_ENABLE is set. The initial settings will be taken from the global settings at the time of context creation. The other settings that control coredump output can be modified by calling cuCoredumpSetAttribute from the created context after it becomes current.

  • CU_CTX_SYNC_MEMOPS: Ensures that synchronous memory operations initiated on this context will always synchronize. See further documentation in the section titled “API Synchronization behavior” to learn more about cases when synchronous memory operations can exhibit asynchronous behavior.

Parameters:
  • dev (CUdevice) – Device for which the primary context flags are set

  • flags (unsigned int) – New flags for the device

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_DEVICE, CUDA_ERROR_INVALID_VALUE,

Return type:

CUresult

cuda.bindings.driver.cuDevicePrimaryCtxGetState(dev)

Get the state of the primary context.

Returns in *flags the flags for the primary context of dev, and in *active whether it is active. See cuDevicePrimaryCtxSetFlags for flag values.

Parameters:

dev (CUdevice) – Device to get primary context flags for

Returns:
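
A sketch that sets a scheduling flag on the primary context and then reads the state back; error checking is omitted:

    from cuda.bindings import driver

    err, dev = driver.cuDeviceGet(0)
    err, = driver.cuDevicePrimaryCtxSetFlags(
        dev, driver.CUctx_flags.CU_CTX_SCHED_BLOCKING_SYNC)
    err, flags, active = driver.cuDevicePrimaryCtxGetState(dev)
    print(f"flags={flags:#x}, active={active}")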

cuda.bindings.driver.cuDevicePrimaryCtxReset(dev)

Destroy all allocations and reset all state on the primary context.

Explicitly destroys and cleans up all resources associated with the current device in the current process.

Note that it is the responsibility of the calling function to ensure that no other module in the process is using the device any more. For that reason, it is recommended to use cuDevicePrimaryCtxRelease() in most cases. However, it is safe for other modules to call cuDevicePrimaryCtxRelease() even after resetting the device. Resetting the primary context does not release it; an application that has retained the primary context should explicitly release its usage.

Parameters:

dev (CUdevice) – Device for which primary context is destroyed

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_DEVICE, CUDA_ERROR_PRIMARY_CONTEXT_ACTIVE

Return type:

CUresult

Context Management

This section describes the context management functions of the low-level CUDA driver application programming interface.

Please note that some functions are described in the Primary Context Management section.

cuda.bindings.driver.cuCtxCreate(unsigned int flags, dev)

Create a CUDA context.

Creates a new CUDA context and associates it with the calling thread. The flags parameter is described below. The context is created with a usage count of 1 and the caller of cuCtxCreate() must call cuCtxDestroy() when done using the context. If a context is already current to the thread, it is supplanted by the newly created context and may be restored by a subsequent call to cuCtxPopCurrent().

The three LSBs of the flags parameter can be used to control how the OS thread, which owns the CUDA context at the time of an API call, interacts with the OS scheduler when waiting for results from the GPU. Only one of the scheduling flags can be set when creating a context.

  • CU_CTX_SCHED_SPIN: Instruct CUDA to actively spin when waiting for results from the GPU. This can decrease latency when waiting for the GPU, but may lower the performance of CPU threads if they are performing work in parallel with the CUDA thread.

  • CU_CTX_SCHED_YIELD: Instruct CUDA to yield its thread when waiting for results from the GPU. This can increase latency when waiting for the GPU, but can increase the performance of CPU threads performing work in parallel with the GPU.

  • CU_CTX_SCHED_BLOCKING_SYNC: Instruct CUDA to block the CPU thread on a synchronization primitive when waiting for the GPU to finish work.

  • CU_CTX_BLOCKING_SYNC: Instruct CUDA to block the CPU thread on a synchronization primitive when waiting for the GPU to finish work. Deprecated: This flag was deprecated as of CUDA 4.0 and was replaced with CU_CTX_SCHED_BLOCKING_SYNC.

  • CU_CTX_SCHED_AUTO: The default value if the flags parameter is zero, uses a heuristic based on the number of active CUDA contexts in the process C and the number of logical processors in the system P. If C > P, then CUDA will yield to other OS threads when waiting for the GPU (CU_CTX_SCHED_YIELD), otherwise CUDA will not yield while waiting for results and actively spin on the processor (CU_CTX_SCHED_SPIN). Additionally, on Tegra devices, CU_CTX_SCHED_AUTO uses a heuristic based on the power profile of the platform and may choose CU_CTX_SCHED_BLOCKING_SYNC for low-powered devices.

  • CU_CTX_MAP_HOST: Instruct CUDA to support mapped pinned allocations. This flag must be set in order to allocate pinned host memory that is accessible to the GPU.

  • CU_CTX_LMEM_RESIZE_TO_MAX: Instruct CUDA to not reduce local memory after resizing local memory for a kernel. This can prevent thrashing by local memory allocations when launching many kernels with high local memory usage at the cost of potentially increased memory usage. Deprecated: This flag is deprecated and the behavior enabled by this flag is now the default and cannot be disabled. Instead, the per-thread stack size can be controlled with cuCtxSetLimit().

  • CU_CTX_COREDUMP_ENABLE: If GPU coredumps have not been enabled globally with cuCoredumpSetAttributeGlobal or environment variables, this flag can be set during context creation to instruct CUDA to create a coredump if this context raises an exception during execution. These environment variables are described in the CUDA-GDB user guide under the “GPU core dump support” section. The initial attributes will be taken from the global attributes at the time of context creation. The other attributes that control coredump output can be modified by calling cuCoredumpSetAttribute from the created context after it becomes current.

  • CU_CTX_USER_COREDUMP_ENABLE: If user-triggered GPU coredumps have not been enabled globally with cuCoredumpSetAttributeGlobal or environment variables, this flag can be set during context creation to instruct CUDA to create a coredump if data is written to a certain pipe that is present in the OS space. These environment variables are described in the CUDA-GDB user guide under the “GPU core dump support” section. It is important to note that the pipe name must be set with cuCoredumpSetAttributeGlobal before creating the context if this flag is used. Setting this flag implies that CU_CTX_COREDUMP_ENABLE is set. The initial attributes will be taken from the global attributes at the time of context creation. The other attributes that control coredump output can be modified by calling cuCoredumpSetAttribute from the created context after it becomes current. Setting this flag on any context creation is equivalent to setting the CU_COREDUMP_ENABLE_USER_TRIGGER attribute to true globally.

  • CU_CTX_SYNC_MEMOPS: Ensures that synchronous memory operations initiated on this context will always synchronize. See further documentation in the section titled “API Synchronization behavior” to learn more about cases when synchronous memory operations can exhibit asynchronous behavior.

Context creation will fail with CUDA_ERROR_UNKNOWN if the compute mode of the device is CU_COMPUTEMODE_PROHIBITED. The function cuDeviceGetAttribute() can be used with CU_DEVICE_ATTRIBUTE_COMPUTE_MODE to determine the compute mode of the device. The nvidia-smi tool can be used to set the compute mode for devices. Documentation for nvidia-smi can be obtained by passing a -h option to it.

Parameters:
  • flags (unsigned int) – Context creation flags

  • dev (CUdevice) – Device to create context on

Returns:

Notes

In most cases it is recommended to use cuDevicePrimaryCtxRetain.
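
If a dedicated context is still required, a minimal sketch looks like this; error checking is omitted:

    from cuda.bindings import driver

    err, dev = driver.cuDeviceGet(0)
    err, ctx = driver.cuCtxCreate(driver.CUctx_flags.CU_CTX_SCHED_YIELD, dev)
    # ... the new context is current on this thread ...
    err, = driver.cuCtxDestroy(ctx)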

cuda.bindings.driver.cuCtxCreate_v3(paramsArray: Optional[Tuple[CUexecAffinityParam] | List[CUexecAffinityParam]], int numParams, unsigned int flags, dev)

Create a CUDA context with execution affinity.

Creates a new CUDA context with execution affinity and associates it with the calling thread. The paramsArray and flags parameter are described below. The context is created with a usage count of 1 and the caller of cuCtxCreate() must call cuCtxDestroy() when done using the context. If a context is already current to the thread, it is supplanted by the newly created context and may be restored by a subsequent call to cuCtxPopCurrent().

The type and the amount of execution resource the context can use is limited by paramsArray and numParams. The paramsArray is an array of CUexecAffinityParam and the numParams describes the size of the array. If two CUexecAffinityParam in the array have the same type, the latter execution affinity parameter overrides the former execution affinity parameter. The supported execution affinity types are:

  • CU_EXEC_AFFINITY_TYPE_SM_COUNT limits the portion of SMs that the context can use. The portion of SMs is specified as the number of SMs via CUexecAffinitySmCount. This limit will be internally rounded up to the next hardware-supported amount. Hence, it is imperative to query the actual execution affinity of the context via cuCtxGetExecAffinity after context creation. Currently, this attribute is only supported under Volta+ MPS.

The three LSBs of the flags parameter can be used to control how the OS thread, which owns the CUDA context at the time of an API call, interacts with the OS scheduler when waiting for results from the GPU. Only one of the scheduling flags can be set when creating a context.

  • CU_CTX_SCHED_SPIN: Instruct CUDA to actively spin when waiting for results from the GPU. This can decrease latency when waiting for the GPU, but may lower the performance of CPU threads if they are performing work in parallel with the CUDA thread.

  • CU_CTX_SCHED_YIELD: Instruct CUDA to yield its thread when waiting for results from the GPU. This can increase latency when waiting for the GPU, but can increase the performance of CPU threads performing work in parallel with the GPU.

  • CU_CTX_SCHED_BLOCKING_SYNC: Instruct CUDA to block the CPU thread on a synchronization primitive when waiting for the GPU to finish work.

  • CU_CTX_BLOCKING_SYNC: Instruct CUDA to block the CPU thread on a synchronization primitive when waiting for the GPU to finish work. Deprecated: This flag was deprecated as of CUDA 4.0 and was replaced with CU_CTX_SCHED_BLOCKING_SYNC.

  • CU_CTX_SCHED_AUTO: The default value if the flags parameter is zero, uses a heuristic based on the number of active CUDA contexts in the process C and the number of logical processors in the system P. If C > P, then CUDA will yield to other OS threads when waiting for the GPU (CU_CTX_SCHED_YIELD), otherwise CUDA will not yield while waiting for results and actively spin on the processor (CU_CTX_SCHED_SPIN). Additionally, on Tegra devices, CU_CTX_SCHED_AUTO uses a heuristic based on the power profile of the platform and may choose CU_CTX_SCHED_BLOCKING_SYNC for low-powered devices.

  • CU_CTX_MAP_HOST: Instruct CUDA to support mapped pinned allocations. This flag must be set in order to allocate pinned host memory that is accessible to the GPU.

  • CU_CTX_LMEM_RESIZE_TO_MAX: Instruct CUDA to not reduce local memory after resizing local memory for a kernel. This can prevent thrashing by local memory allocations when launching many kernels with high local memory usage at the cost of potentially increased memory usage. Deprecated: This flag is deprecated and the behavior enabled by this flag is now the default and cannot be disabled. Instead, the per-thread stack size can be controlled with cuCtxSetLimit().

  • CU_CTX_COREDUMP_ENABLE: If GPU coredumps have not been enabled globally with cuCoredumpSetAttributeGlobal or environment variables, this flag can be set during context creation to instruct CUDA to create a coredump if this context raises an exception during execution. These environment variables are described in the CUDA-GDB user guide under the “GPU core dump support” section. The initial attributes will be taken from the global attributes at the time of context creation. The other attributes that control coredump output can be modified by calling cuCoredumpSetAttribute from the created context after it becomes current.

  • CU_CTX_USER_COREDUMP_ENABLE: If user-triggered GPU coredumps have not been enabled globally with cuCoredumpSetAttributeGlobal or environment variables, this flag can be set during context creation to instruct CUDA to create a coredump if data is written to a certain pipe that is present in the OS space. These environment variables are described in the CUDA-GDB user guide under the “GPU core dump support” section. It is important to note that the pipe name must be set with cuCoredumpSetAttributeGlobal before creating the context if this flag is used. Setting this flag implies that CU_CTX_COREDUMP_ENABLE is set. The initial attributes will be taken from the global attributes at the time of context creation. The other attributes that control coredump output can be modified by calling cuCoredumpSetAttribute from the created context after it becomes current. Setting this flag on any context creation is equivalent to setting the CU_COREDUMP_ENABLE_USER_TRIGGER attribute to true globally.

Context creation will fail with CUDA_ERROR_UNKNOWN if the compute mode of the device is CU_COMPUTEMODE_PROHIBITED. The function cuDeviceGetAttribute() can be used with CU_DEVICE_ATTRIBUTE_COMPUTE_MODE to determine the compute mode of the device. The nvidia-smi tool can be used to set the compute mode for devices. Documentation for nvidia-smi can be obtained by passing a -h option to it.

Parameters:
  • paramsArray (List[CUexecAffinityParam]) – Execution affinity parameters

  • numParams (int) – Number of execution affinity parameters

  • flags (unsigned int) – Context creation flags

  • dev (CUdevice) – Device to create context on

Returns:

cuda.bindings.driver.cuCtxCreate_v4(CUctxCreateParams ctxCreateParams: Optional[CUctxCreateParams], unsigned int flags, dev)

Create a CUDA context.

Creates a new CUDA context and associates it with the calling thread. The flags parameter is described below. The context is created with a usage count of 1 and the caller of cuCtxCreate() must call cuCtxDestroy() when done using the context. If a context is already current to the thread, it is supplanted by the newly created context and may be restored by a subsequent call to cuCtxPopCurrent().

CUDA context can be created with execution affinity. The type and the amount of execution resource the context can use is limited by paramsArray and numExecAffinityParams in execAffinity. The paramsArray is an array of CUexecAffinityParam and the numExecAffinityParams describes the size of the paramsArray. If two CUexecAffinityParam in the array have the same type, the latter execution affinity parameter overrides the former execution affinity parameter. The supported execution affinity types are:

  • CU_EXEC_AFFINITY_TYPE_SM_COUNT limits the portion of SMs that the context can use. The portion of SMs is specified as the number of SMs via CUexecAffinitySmCount. This limit will be internally rounded up to the next hardware-supported amount. Hence, it is imperative to query the actual execution affinity of the context via cuCtxGetExecAffinity after context creation. Currently, this attribute is only supported under Volta+ MPS.

A CUDA context can be created in CIG (CUDA in Graphics) mode by setting cigParams. Hardware support and software support for graphics clients can be determined using cuDeviceGetAttribute() with CU_DEVICE_ATTRIBUTE_D3D12_CIG_SUPPORTED. Data from the graphics client is shared with CUDA via the sharedData member of cigParams. For D3D12, sharedData is an ID3D12CommandQueue handle.

Either execAffinityParams or cigParams can be set to a non-null value. Setting both to non-null values will result in undefined behavior.

The three LSBs of the flags parameter can be used to control how the OS thread, which owns the CUDA context at the time of an API call, interacts with the OS scheduler when waiting for results from the GPU. Only one of the scheduling flags can be set when creating a context.

  • CU_CTX_SCHED_SPIN: Instruct CUDA to actively spin when waiting for results from the GPU. This can decrease latency when waiting for the GPU, but may lower the performance of CPU threads if they are performing work in parallel with the CUDA thread.

  • CU_CTX_SCHED_YIELD: Instruct CUDA to yield its thread when waiting for results from the GPU. This can increase latency when waiting for the GPU, but can increase the performance of CPU threads performing work in parallel with the GPU.

  • CU_CTX_SCHED_BLOCKING_SYNC: Instruct CUDA to block the CPU thread on a synchronization primitive when waiting for the GPU to finish work.

  • CU_CTX_BLOCKING_SYNC: Instruct CUDA to block the CPU thread on a synchronization primitive when waiting for the GPU to finish work. Deprecated: This flag was deprecated as of CUDA 4.0 and was replaced with CU_CTX_SCHED_BLOCKING_SYNC.

  • CU_CTX_SCHED_AUTO: The default value if the flags parameter is zero, uses a heuristic based on the number of active CUDA contexts in the process C and the number of logical processors in the system P. If C > P, then CUDA will yield to other OS threads when waiting for the GPU (CU_CTX_SCHED_YIELD), otherwise CUDA will not yield while waiting for results and actively spin on the processor (CU_CTX_SCHED_SPIN). Additionally, on Tegra devices, CU_CTX_SCHED_AUTO uses a heuristic based on the power profile of the platform and may choose CU_CTX_SCHED_BLOCKING_SYNC for low-powered devices.

  • CU_CTX_MAP_HOST: Instruct CUDA to support mapped pinned allocations. This flag must be set in order to allocate pinned host memory that is accessible to the GPU.

  • CU_CTX_LMEM_RESIZE_TO_MAX: Instruct CUDA to not reduce local memory after resizing local memory for a kernel. This can prevent thrashing by local memory allocations when launching many kernels with high local memory usage at the cost of potentially increased memory usage. Deprecated: This flag is deprecated and the behavior enabled by this flag is now the default and cannot be disabled. Instead, the per-thread stack size can be controlled with cuCtxSetLimit().

  • CU_CTX_COREDUMP_ENABLE: If GPU coredumps have not been enabled globally with cuCoredumpSetAttributeGlobal or environment variables, this flag can be set during context creation to instruct CUDA to create a coredump if this context raises an exception during execution. These environment variables are described in the CUDA-GDB user guide under the “GPU core dump support” section. The initial attributes will be taken from the global attributes at the time of context creation. The other attributes that control coredump output can be modified by calling cuCoredumpSetAttribute from the created context after it becomes current. This flag is not supported when CUDA context is created in CIG(CUDA in Graphics) mode.

  • CU_CTX_USER_COREDUMP_ENABLE: If user-triggered GPU coredumps have not been enabled globally with cuCoredumpSetAttributeGlobal or environment variables, this flag can be set during context creation to instruct CUDA to create a coredump if data is written to a certain pipe that is present in the OS space. These environment variables are described in the CUDA-GDB user guide under the “GPU core dump support” section. It is important to note that the pipe name must be set with cuCoredumpSetAttributeGlobal before creating the context if this flag is used. Setting this flag implies that CU_CTX_COREDUMP_ENABLE is set. The initial attributes will be taken from the global attributes at the time of context creation. The other attributes that control coredump output can be modified by calling cuCoredumpSetAttribute from the created context after it becomes current. Setting this flag on any context creation is equivalent to setting the CU_COREDUMP_ENABLE_USER_TRIGGER attribute to true globally. This flag is not supported when CUDA context is created in CIG(CUDA in Graphics) mode.

  • CU_CTX_SYNC_MEMOPS: Ensures that synchronous memory operations initiated on this context will always synchronize. See further documentation in the section titled “API Synchronization behavior” to learn more about cases when synchronous memory operations can exhibit asynchronous behavior.

Context creation will fail with CUDA_ERROR_UNKNOWN if the compute mode of the device is CU_COMPUTEMODE_PROHIBITED. The function cuDeviceGetAttribute() can be used with CU_DEVICE_ATTRIBUTE_COMPUTE_MODE to determine the compute mode of the device. The nvidia-smi tool can be used to set the compute mode for devices. Documentation for nvidia-smi can be obtained by passing a -h option to it.

Context creation will fail with CUDA_ERROR_INVALID_VALUE if an invalid parameter was passed by the client to create the CUDA context.

Context creation in CIG mode will fail with CUDA_ERROR_NOT_SUPPORTED if CIG is not supported by the device or the driver.

Parameters:
  • ctxCreateParams (CUctxCreateParams) – Context creation parameters

  • flags (unsigned int) – Context creation flags

  • dev (CUdevice) – Device to create context on

Returns:
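
Below is a minimal sketch (not an official example) of creating and destroying a context with one of the scheduling flags described above. It assumes the basic two-argument cuCtxCreate(flags, dev) overload exposed by the bindings rather than the _v4 variant, and reduces error handling to asserts.

    from cuda.bindings import driver

    err, = driver.cuInit(0)
    assert err == driver.CUresult.CUDA_SUCCESS

    err, dev = driver.cuDeviceGet(0)
    assert err == driver.CUresult.CUDA_SUCCESS

    # Only one scheduling flag may be set; block the CPU thread while waiting on the GPU.
    flags = int(driver.CUctx_flags.CU_CTX_SCHED_BLOCKING_SYNC)
    err, ctx = driver.cuCtxCreate(flags, dev)
    assert err == driver.CUresult.CUDA_SUCCESS

    # ... submit work to the context ...

    err, = driver.cuCtxDestroy(ctx)
    assert err == driver.CUresult.CUDA_SUCCESS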

cuda.bindings.driver.cuCtxDestroy(ctx)

Destroy a CUDA context.

Destroys the CUDA context specified by ctx. The context ctx will be destroyed regardless of how many threads it is current to. It is the responsibility of the calling function to ensure that no API calls are issued using ctx while cuCtxDestroy() is executing.

Destroys and cleans up all resources associated with the context. It is the caller’s responsibility to ensure that the context or its resources are not accessed or passed in subsequent API calls and doing so will result in undefined behavior. These resources include CUDA types CUmodule, CUfunction, CUstream, CUevent, CUarray, CUmipmappedArray, CUtexObject, CUsurfObject, CUtexref, CUsurfref, CUgraphicsResource, CUlinkState, CUexternalMemory and CUexternalSemaphore. These resources also include memory allocations by cuMemAlloc(), cuMemAllocHost(), cuMemAllocManaged() and cuMemAllocPitch().

If ctx is current to the calling thread then ctx will also be popped from the current thread’s context stack (as though cuCtxPopCurrent() were called). If ctx is current to other threads, then ctx will remain current to those threads, and attempting to access ctx from those threads will result in the error CUDA_ERROR_CONTEXT_IS_DESTROYED.

Parameters:

ctx (CUcontext) – Context to destroy

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult

Notes

cuCtxDestroy() will not destroy memory allocations by cuMemCreate(), cuMemAllocAsync() and cuMemAllocFromPoolAsync(). These memory allocations are not associated with any CUDA context and need to be destroyed explicitly.

cuda.bindings.driver.cuCtxPushCurrent(ctx)

Pushes a context on the current CPU thread.

Pushes the given context ctx onto the CPU thread’s stack of current contexts. The specified context becomes the CPU thread’s current context, so all CUDA functions that operate on the current context are affected.

The previous current context may be made current again by calling cuCtxDestroy() or cuCtxPopCurrent().

Parameters:

ctx (CUcontext) – Context to push

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult

cuda.bindings.driver.cuCtxPopCurrent()

Pops the current CUDA context from the current CPU thread.

Pops the current CUDA context from the CPU thread and passes back the old context handle in *pctx. That context may then be made current to a different CPU thread by calling cuCtxPushCurrent().

If a context was current to the CPU thread before cuCtxCreate() or cuCtxPushCurrent() was called, this function makes that context current to the CPU thread again.

Returns:

cuda.bindings.driver.cuCtxSetCurrent(ctx)

Binds the specified CUDA context to the calling CPU thread.

Binds the specified CUDA context to the calling CPU thread. If ctx is NULL then the CUDA context previously bound to the calling CPU thread is unbound and CUDA_SUCCESS is returned.

If there exists a CUDA context stack on the calling CPU thread, this will replace the top of that stack with ctx. If ctx is NULL then this will be equivalent to popping the top of the calling CPU thread’s CUDA context stack (or a no-op if the calling CPU thread’s CUDA context stack is empty).

Parameters:

ctx (CUcontext) – Context to bind to the calling CPU thread

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT

Return type:

CUresult

cuda.bindings.driver.cuCtxGetCurrent()

Returns the CUDA context bound to the calling CPU thread.

Returns in *pctx the CUDA context bound to the calling CPU thread. If no context is bound to the calling CPU thread then *pctx is set to NULL and CUDA_SUCCESS is returned.

Returns:
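
A minimal sketch of the context-stack calls above, assuming ctx_a and ctx_b are CUcontext handles created earlier (hypothetical names):

    from cuda.bindings import driver

    err, = driver.cuCtxPushCurrent(ctx_a)     # ctx_a becomes current on this thread
    err, current = driver.cuCtxGetCurrent()   # current now refers to ctx_a

    err, previous = driver.cuCtxPopCurrent()  # pops ctx_a; the prior context (if any) is current again
    err, = driver.cuCtxSetCurrent(ctx_b)      # replaces the top of the stack with ctx_b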

cuda.bindings.driver.cuCtxGetDevice()

Returns the device ID for the current context.

Returns in *device the ordinal of the current context’s device.

Returns:

cuda.bindings.driver.cuCtxGetFlags()

Returns the flags for the current context.

Returns in *flags the flags of the current context. See cuCtxCreate for flag values.

Returns:

cuda.bindings.driver.cuCtxSetFlags(unsigned int flags)

Sets the flags for the current context.

Sets the flags for the current context overwriting previously set ones. See cuDevicePrimaryCtxSetFlags for flag values.

Parameters:

flags (unsigned int) – Flags to set on the current context

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_VALUE,

Return type:

CUresult

cuda.bindings.driver.cuCtxGetId(ctx)

Returns the unique Id associated with the context supplied.

Returns in ctxId the unique Id which is associated with a given context. The Id is unique for the life of the program for this instance of CUDA. If context is supplied as NULL and there is one current, the Id of the current context is returned.

Parameters:

ctx (CUcontext) – Context for which to obtain the Id

Returns:

cuda.bindings.driver.cuCtxSynchronize()

Block for the current context’s tasks to complete.

Blocks until the current context has completed all preceding requested tasks. If the current context is the primary context, green contexts that have been created will also be synchronized. cuCtxSynchronize() returns an error if one of the preceding tasks failed. If the context was created with the CU_CTX_SCHED_BLOCKING_SYNC flag, the CPU thread will block until the GPU context has finished its work.

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT

Return type:

CUresult

cuda.bindings.driver.cuCtxSetLimit(limit: CUlimit, size_t value)

Set resource limits.

Setting limit to value is a request by the application to update the current limit maintained by the context. The driver is free to modify the requested value to meet h/w requirements (this could be clamping to minimum or maximum values, rounding up to nearest element size, etc). The application can use cuCtxGetLimit() to find out exactly what the limit has been set to.

Setting each CUlimit has its own specific restrictions, so each is discussed here.

  • CU_LIMIT_STACK_SIZE controls the stack size in bytes of each GPU thread. The driver automatically increases the per-thread stack size for each kernel launch as needed. This size isn’t reset back to the original value after each launch. Setting this value will take effect immediately, and if necessary, the device will block until all preceding requested tasks are complete.

  • CU_LIMIT_PRINTF_FIFO_SIZE controls the size in bytes of the FIFO used by the printf() device system call. Setting CU_LIMIT_PRINTF_FIFO_SIZE must be performed before launching any kernel that uses the printf() device system call, otherwise CUDA_ERROR_INVALID_VALUE will be returned.

  • CU_LIMIT_MALLOC_HEAP_SIZE controls the size in bytes of the heap used by the malloc() and free() device system calls. Setting CU_LIMIT_MALLOC_HEAP_SIZE must be performed before launching any kernel that uses the malloc() or free() device system calls, otherwise CUDA_ERROR_INVALID_VALUE will be returned.

  • CU_LIMIT_DEV_RUNTIME_SYNC_DEPTH controls the maximum nesting depth of a grid at which a thread can safely call cudaDeviceSynchronize(). Setting this limit must be performed before any launch of a kernel that uses the device runtime and calls cudaDeviceSynchronize() above the default sync depth, two levels of grids. Calls to cudaDeviceSynchronize() will fail with error code cudaErrorSyncDepthExceeded if the limitation is violated. This limit can be set smaller than the default or up to the maximum launch depth of 24. When setting this limit, keep in mind that additional levels of sync depth require the driver to reserve large amounts of device memory which can no longer be used for user allocations. If these reservations of device memory fail, cuCtxSetLimit() will return CUDA_ERROR_OUT_OF_MEMORY, and the limit can be reset to a lower value. This limit is only applicable to devices of compute capability < 9.0. Attempting to set this limit on devices of other compute capability versions will result in the error CUDA_ERROR_UNSUPPORTED_LIMIT being returned.

  • CU_LIMIT_DEV_RUNTIME_PENDING_LAUNCH_COUNT controls the maximum number of outstanding device runtime launches that can be made from the current context. A grid is outstanding from the point of launch up until the grid is known to have been completed. Device runtime launches which violate this limitation fail and return cudaErrorLaunchPendingCountExceeded when cudaGetLastError() is called after launch. If more pending launches than the default (2048 launches) are needed for a module using the device runtime, this limit can be increased. Keep in mind that being able to sustain additional pending launches will require the driver to reserve larger amounts of device memory upfront which can no longer be used for allocations. If these reservations fail, cuCtxSetLimit() will return CUDA_ERROR_OUT_OF_MEMORY, and the limit can be reset to a lower value. This limit is only applicable to devices of compute capability 3.5 and higher. Attempting to set this limit on devices of compute capability less than 3.5 will result in the error CUDA_ERROR_UNSUPPORTED_LIMIT being returned.

  • CU_LIMIT_MAX_L2_FETCH_GRANULARITY controls the L2 cache fetch granularity. Values can range from 0B to 128B. This is purely a performance hint and it can be ignored or clamped depending on the platform.

  • CU_LIMIT_PERSISTING_L2_CACHE_SIZE controls size in bytes available for persisting L2 cache. This is purely a performance hint and it can be ignored or clamped depending on the platform.

Parameters:
  • limit (CUlimit) – Limit to set

  • value (size_t) – Size of limit

Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_UNSUPPORTED_LIMIT, CUDA_ERROR_OUT_OF_MEMORY, CUDA_ERROR_INVALID_CONTEXT

Return type:

CUresult

cuda.bindings.driver.cuCtxGetLimit(limit: CUlimit)

Returns resource limits.

Returns in *pvalue the current size of limit. The supported CUlimit values are:

Parameters:

limit (CUlimit) – Limit to query

Returns:
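
A minimal sketch of setting a limit and reading back the value the driver actually applied (the request may be clamped or rounded, as noted above):

    from cuda.bindings import driver

    # Request an 8 KiB per-thread stack for the current context.
    err, = driver.cuCtxSetLimit(driver.CUlimit.CU_LIMIT_STACK_SIZE, 8 * 1024)
    assert err == driver.CUresult.CUDA_SUCCESS

    err, value = driver.cuCtxGetLimit(driver.CUlimit.CU_LIMIT_STACK_SIZE)
    print(f"per-thread stack size is now {value} bytes")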

cuda.bindings.driver.cuCtxGetCacheConfig()

Returns the preferred cache configuration for the current context.

On devices where the L1 cache and shared memory use the same hardware resources, this function returns through pconfig the preferred cache configuration for the current context. This is only a preference. The driver will use the requested configuration if possible, but it is free to choose a different configuration if required to execute functions.

This will return a pconfig of CU_FUNC_CACHE_PREFER_NONE on devices where the size of the L1 cache and shared memory are fixed.

The supported cache configurations are:

Returns:

cuda.bindings.driver.cuCtxSetCacheConfig(config: CUfunc_cache)

Sets the preferred cache configuration for the current context.

On devices where the L1 cache and shared memory use the same hardware resources, this sets through config the preferred cache configuration for the current context. This is only a preference. The driver will use the requested configuration if possible, but it is free to choose a different configuration if required to execute the function. Any function preference set via cuFuncSetCacheConfig() or cuKernelSetCacheConfig() will be preferred over this context-wide setting. Setting the context-wide cache configuration to CU_FUNC_CACHE_PREFER_NONE will cause subsequent kernel launches to prefer to not change the cache configuration unless required to launch the kernel.

This setting does nothing on devices where the size of the L1 cache and shared memory are fixed.

Launching a kernel with a different preference than the most recent preference setting may insert a device-side synchronization point.

The supported cache configurations are:

Parameters:

config (CUfunc_cache) – Requested cache configuration

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult

cuda.bindings.driver.cuCtxGetApiVersion(ctx)

Gets the context’s API version.

Returns a version number in version corresponding to the capabilities of the context (e.g. 3010 or 3020), which library developers can use to direct callers to a specific API version. If ctx is NULL, returns the API version used to create the currently bound context.

Note that new API versions are only introduced when context capabilities are changed that break binary compatibility, so the API version and driver version may be different. For example, it is valid for the API version to be 3020 while the driver version is 4020.

Parameters:

ctx (CUcontext) – Context to check

Returns:

cuda.bindings.driver.cuCtxGetStreamPriorityRange()

Returns numerical values that correspond to the least and greatest stream priorities.

Returns in *leastPriority and *greatestPriority the numerical values that correspond to the least and greatest stream priorities respectively. Stream priorities follow a convention where lower numbers imply greater priorities. The range of meaningful stream priorities is given by [*greatestPriority, *leastPriority]. If the user attempts to create a stream with a priority value that is outside the meaningful range as specified by this API, the priority is automatically clamped down or up to either *leastPriority or *greatestPriority respectively. See cuStreamCreateWithPriority for details on creating a priority stream. A NULL may be passed in for *leastPriority or *greatestPriority if the value is not desired.

This function will return ‘0’ in both *leastPriority and *greatestPriority if the current context’s device does not support stream priorities (see cuDeviceGetAttribute).

Returns:

  • CUresult – CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE

  • leastPriority (int) – Pointer to an int in which the numerical value for least stream priority is returned

  • greatestPriority (int) – Pointer to an int in which the numerical value for greatest stream priority is returned
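
A minimal sketch of querying the priority range for the current context (lower numbers mean greater priority):

    from cuda.bindings import driver

    err, least, greatest = driver.cuCtxGetStreamPriorityRange()
    print(f"stream priorities range from {greatest} (greatest) to {least} (least)")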

cuda.bindings.driver.cuCtxResetPersistingL2Cache()

Resets all persisting lines in cache to normal status.

cuCtxResetPersistingL2Cache() resets all persisting lines in cache to normal status. Takes effect on function return.

Returns:

CUDA_SUCCESS, CUDA_ERROR_NOT_SUPPORTED

Return type:

CUresult

cuda.bindings.driver.cuCtxGetExecAffinity(typename: CUexecAffinityType)

Returns the execution affinity setting for the current context.

Returns in *pExecAffinity the current value of typename. The supported CUexecAffinityType values are:

Parameters:

typename (CUexecAffinityType) – Execution affinity type to query

Returns:

cuda.bindings.driver.cuCtxRecordEvent(hCtx, hEvent)

Records an event.

Captures in hEvent all the activities of the context hCtx at the time of this call. hEvent and hCtx must be from the same CUDA context, otherwise CUDA_ERROR_INVALID_HANDLE will be returned. Calls such as cuEventQuery() or cuCtxWaitEvent() will then examine or wait for completion of the work that was captured. Uses of hCtx after this call do not modify hEvent. If the context passed to hCtx is the primary context, hEvent will capture all the activities of the primary context and its green contexts. If the context passed to hCtx is a context converted from green context via cuCtxFromGreenCtx(), hEvent will capture only the activities of the green context.

Parameters:
Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_HANDLE, CUDA_ERROR_STREAM_CAPTURE_UNSUPPORTED

Return type:

CUresult

Notes

The API will return CUDA_ERROR_STREAM_CAPTURE_UNSUPPORTED if the specified context hCtx has a stream in capture mode. In such a case, the call will invalidate all the conflicting captures.

cuda.bindings.driver.cuCtxWaitEvent(hCtx, hEvent)

Make a context wait on an event.

Makes all future work submitted to context hCtx wait for all work captured in hEvent. The synchronization will be performed on the device and will not block the calling CPU thread. See cuCtxRecordEvent() for details on what is captured by an event. If the context passed to hCtx is the primary context, the primary context and its green contexts will wait for hEvent. If the context passed to hCtx is a context converted from green context via cuCtxFromGreenCtx(), the green context will wait for hEvent.

Parameters:
Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_HANDLE, CUDA_ERROR_STREAM_CAPTURE_UNSUPPORTED

Return type:

CUresult

Notes

hEvent may be from a different context or device than hCtx.

The API will return CUDA_ERROR_STREAM_CAPTURE_UNSUPPORTED and invalidate the capture if the specified event hEvent is part of an ongoing capture sequence or if the specified context hCtx has a stream in capture mode.
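
A minimal sketch of ordering work between two contexts with these APIs, assuming producer_ctx and consumer_ctx are existing CUcontext handles (hypothetical names). Per the description above, the event must be created from the same context that is recorded:

    from cuda.bindings import driver

    # Create the event while producer_ctx is current so they share a context.
    err, = driver.cuCtxPushCurrent(producer_ctx)
    err, event = driver.cuEventCreate(int(driver.CUevent_flags.CU_EVENT_DEFAULT))
    err, _ = driver.cuCtxPopCurrent()

    err, = driver.cuCtxRecordEvent(producer_ctx, event)  # capture the producer's activity so far
    err, = driver.cuCtxWaitEvent(consumer_ctx, event)    # future consumer work waits on it (no CPU block)
    err, = driver.cuEventDestroy(event)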

Module Management

This section describes the module management functions of the low-level CUDA driver application programming interface.

class cuda.bindings.driver.CUmoduleLoadingMode(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

CUDA Lazy Loading status

CU_MODULE_EAGER_LOADING = 1

Lazy Kernel Loading is not enabled

CU_MODULE_LAZY_LOADING = 2

Lazy Kernel Loading is enabled

cuda.bindings.driver.cuModuleLoad(char *fname)

Loads a compute module.

Takes a filename fname and loads the corresponding module module into the current context. The CUDA driver API does not attempt to lazily allocate the resources needed by a module; if the memory for functions and data (constant and global) needed by the module cannot be allocated, cuModuleLoad() fails. The file should be a cubin file as output by nvcc, or a PTX file either as output by nvcc or handwritten, or a fatbin file as output by nvcc from toolchain 4.0 or later.

Parameters:

fname (bytes) – Filename of module to load

Returns:

cuda.bindings.driver.cuModuleLoadData(image)

Load a module’s data.

Takes a pointer image and loads the corresponding module module into the current context. The image may be a cubin or fatbin as output by nvcc, or a NULL-terminated PTX, either as output by nvcc or hand-written.

Parameters:

image (Any) – Module data to load

Returns:

cuda.bindings.driver.cuModuleLoadDataEx(image, unsigned int numOptions, options: Optional[Tuple[CUjit_option] | List[CUjit_option]], optionValues: Optional[Tuple[Any] | List[Any]])

Load a module’s data with options.

Takes a pointer image and loads the corresponding module module into the current context. The image may be a cubin or fatbin as output by nvcc, or a NULL-terminated PTX, either as output by nvcc or hand-written.

Parameters:
  • image (Any) – Module data to load

  • numOptions (unsigned int) – Number of options

  • options (List[CUjit_option]) – Options for JIT

  • optionValues (List[Any]) – Option values for JIT

Returns:

cuda.bindings.driver.cuModuleLoadFatBinary(fatCubin)

Load a module’s data.

Takes a pointer fatCubin and loads the corresponding module module into the current context. The pointer represents a fat binary object, which is a collection of different cubin and/or PTX files, all representing the same device code, but compiled and optimized for different architectures.

Prior to CUDA 4.0, there was no documented API for constructing and using fat binary objects by programmers. Starting with CUDA 4.0, fat binary objects can be constructed by providing the -fatbin option to nvcc. More information can be found in the nvcc document.

Parameters:

fatCubin (Any) – Fat binary to load

Returns:

cuda.bindings.driver.cuModuleUnload(hmod)

Unloads a module.

Unloads a module hmod from the current context. Attempting to unload a module which was obtained from the Library Management API such as cuLibraryGetModule will return CUDA_ERROR_NOT_PERMITTED.

Parameters:

hmod (CUmodule) – Module to unload

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_NOT_PERMITTED

Return type:

CUresult

cuda.bindings.driver.cuModuleGetLoadingMode()

Query lazy loading mode.

Returns the lazy loading mode. Module loading mode is controlled by the CUDA_MODULE_LOADING environment variable.

Returns:

See also

cuModuleLoad

cuda.bindings.driver.cuModuleGetFunction(hmod, char *name)

Returns a function handle.

Returns in *hfunc the handle of the function of name name located in module hmod. If no function of that name exists, cuModuleGetFunction() returns CUDA_ERROR_NOT_FOUND.

Parameters:
  • hmod (CUmodule) – Module to retrieve function from

  • name (bytes) – Name of function to retrieve

Returns:
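
A minimal sketch of loading a module from a file and retrieving a kernel, assuming a cubin named saxpy.cubin containing a function saxpy (both hypothetical):

    from cuda.bindings import driver

    err, module = driver.cuModuleLoad(b"saxpy.cubin")
    assert err == driver.CUresult.CUDA_SUCCESS

    err, func = driver.cuModuleGetFunction(module, b"saxpy")
    assert err == driver.CUresult.CUDA_SUCCESS  # CUDA_ERROR_NOT_FOUND if the name is wrong

    # ... launch func with cuLaunchKernel ...

    err, = driver.cuModuleUnload(module)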

cuda.bindings.driver.cuModuleGetFunctionCount(mod)

Returns the number of functions within a module.

Returns in count the number of functions in mod.

Parameters:

mod (CUmodule) – Module to query

Returns:

cuda.bindings.driver.cuModuleEnumerateFunctions(unsigned int numFunctions, mod)

Returns the function handles within a module.

Returns in functions a maximum number of numFunctions function handles within mod. When function loading mode is set to LAZY the function retrieved may be partially loaded. The loading state of a function can be queried using cuFunctionIsLoaded. CUDA APIs may load the function automatically when called with partially loaded function handle which may incur additional latency. Alternatively, cuFunctionLoad can be used to explicitly load a function. The returned function handles become invalid when the module is unloaded.

Parameters:
  • numFunctions (unsigned int) – Maximum number of function handles that may be returned to the buffer

  • mod (CUmodule) – Module to query from

Returns:

cuda.bindings.driver.cuModuleGetGlobal(hmod, char *name)

Returns a global pointer from a module.

Returns in *dptr and *bytes the base pointer and size of the global of name name located in module hmod. If no variable of that name exists, cuModuleGetGlobal() returns CUDA_ERROR_NOT_FOUND. One of the parameters dptr or numbytes (not both) can be NULL in which case it is ignored.

Parameters:
  • hmod (CUmodule) – Module to retrieve global from

  • name (bytes) – Name of global to retrieve

Returns:

See also

cuModuleGetFunction, cuModuleGetTexRef, cuModuleLoad, cuModuleLoadData, cuModuleLoadDataEx, cuModuleLoadFatBinary, cuModuleUnload, cudaGetSymbolAddress, cudaGetSymbolSize

cuda.bindings.driver.cuLinkCreate(unsigned int numOptions, options: Optional[Tuple[CUjit_option] | List[CUjit_option]], optionValues: Optional[Tuple[Any] | List[Any]])

Creates a pending JIT linker invocation.

If the call is successful, the caller owns the returned CUlinkState, which should eventually be destroyed with cuLinkDestroy. The device code machine size (32 or 64 bit) will match the calling application.

Both linker and compiler options may be specified. Compiler options will be applied to inputs to this linker action which must be compiled from PTX. The options CU_JIT_WALL_TIME, CU_JIT_INFO_LOG_BUFFER_SIZE_BYTES, and CU_JIT_ERROR_LOG_BUFFER_SIZE_BYTES will accumulate data until the CUlinkState is destroyed.

The data passed in via cuLinkAddData and cuLinkAddFile will be treated as relocatable (-rdc=true to nvcc) when linking the final cubin during cuLinkComplete and will have similar consequences as offline relocatable device code linking.

optionValues must remain valid for the life of the CUlinkState if output options are used. No other references to inputs are maintained after this call returns.

Parameters:
  • numOptions (unsigned int) – Size of options arrays

  • options (List[CUjit_option]) – Array of linker and compiler options

  • optionValues (List[Any]) – Array of option values, each cast to void *

Returns:

Notes

For LTO-IR input, only LTO-IR compiled with toolkits prior to CUDA 12.0 will be accepted

cuda.bindings.driver.cuLinkAddData(state, typename: CUjitInputType, data, size_t size, char *name, unsigned int numOptions, options: Optional[Tuple[CUjit_option] | List[CUjit_option]], optionValues: Optional[Tuple[Any] | List[Any]])

Add an input to a pending linker invocation.

Ownership of data is retained by the caller. No reference is retained to any inputs after this call returns.

This method accepts only compiler options, which are used if the data must be compiled from PTX, and does not accept any of CU_JIT_WALL_TIME, CU_JIT_INFO_LOG_BUFFER, CU_JIT_ERROR_LOG_BUFFER, CU_JIT_TARGET_FROM_CUCONTEXT, or CU_JIT_TARGET.

Parameters:
  • state (CUlinkState) – A pending linker action.

  • typename (CUjitInputType) – The type of the input data.

  • data (Any) – The input data. PTX must be NULL-terminated.

  • size (size_t) – The length of the input data.

  • name (bytes) – An optional name for this input in log messages.

  • numOptions (unsigned int) – Size of options.

  • options (List[CUjit_option]) – Options to be applied only for this input (overrides options from cuLinkCreate).

  • optionValues (List[Any]) – Array of option values, each cast to void *.

Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_HANDLE, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_INVALID_IMAGE, CUDA_ERROR_INVALID_PTX, CUDA_ERROR_UNSUPPORTED_PTX_VERSION, CUDA_ERROR_OUT_OF_MEMORY, CUDA_ERROR_NO_BINARY_FOR_GPU

Return type:

CUresult

Notes

For LTO-IR input, only LTO-IR compiled with toolkits prior to CUDA 12.0 will be accepted

cuda.bindings.driver.cuLinkAddFile(state, typename: CUjitInputType, char *path, unsigned int numOptions, options: Optional[Tuple[CUjit_option] | List[CUjit_option]], optionValues: Optional[Tuple[Any] | List[Any]])

Add a file input to a pending linker invocation.

No reference is retained to any inputs after this call returns.

This method accepts only compiler options, which are used if the input must be compiled from PTX, and does not accept any of CU_JIT_WALL_TIME, CU_JIT_INFO_LOG_BUFFER, CU_JIT_ERROR_LOG_BUFFER, CU_JIT_TARGET_FROM_CUCONTEXT, or CU_JIT_TARGET.

This method is equivalent to invoking cuLinkAddData on the contents of the file.

Parameters:
  • state (CUlinkState) – A pending linker action

  • typename (CUjitInputType) – The type of the input data

  • path (bytes) – Path to the input file

  • numOptions (unsigned int) – Size of options

  • options (List[CUjit_option]) – Options to be applied only for this input (overrides options from cuLinkCreate)

  • optionValues (List[Any]) – Array of option values, each cast to void *

Returns:

CUDA_SUCCESS, CUDA_ERROR_FILE_NOT_FOUND, CUDA_ERROR_INVALID_HANDLE, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_INVALID_IMAGE, CUDA_ERROR_INVALID_PTX, CUDA_ERROR_UNSUPPORTED_PTX_VERSION, CUDA_ERROR_OUT_OF_MEMORY, CUDA_ERROR_NO_BINARY_FOR_GPU

Return type:

CUresult

Notes

For LTO-IR input, only LTO-IR compiled with toolkits prior to CUDA 12.0 will be accepted

cuda.bindings.driver.cuLinkComplete(state)

Complete a pending linker invocation.

Completes the pending linker action and returns the cubin image for the linked device code, which can be used with cuModuleLoadData. The cubin is owned by state, so it should be loaded before state is destroyed via cuLinkDestroy. This call does not destroy state.

Parameters:

state (CUlinkState) – A pending linker invocation

Returns:
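
A minimal sketch of a complete link, assuming a.ptx and b.ptx (hypothetical files) were built as relocatable device code, and assuming the cubin pointer returned by cuLinkComplete can be passed directly to cuModuleLoadData as a raw address. The cubin is owned by the link state, so the module is loaded before the state is destroyed:

    from cuda.bindings import driver

    err, state = driver.cuLinkCreate(0, None, None)
    err, = driver.cuLinkAddFile(state, driver.CUjitInputType.CU_JIT_INPUT_PTX,
                                b"a.ptx", 0, None, None)
    err, = driver.cuLinkAddFile(state, driver.CUjitInputType.CU_JIT_INPUT_PTX,
                                b"b.ptx", 0, None, None)

    err, cubin, size = driver.cuLinkComplete(state)  # cubin stays owned by `state`
    err, module = driver.cuModuleLoadData(cubin)     # load before destroying the state
    err, = driver.cuLinkDestroy(state)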

cuda.bindings.driver.cuLinkDestroy(state)

Destroys state for a JIT linker invocation.

Parameters:

state (CUlinkState) – State object for the linker invocation

Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_HANDLE

Return type:

CUresult

See also

cuLinkCreate

Library Management

This section describes the library management functions of the low-level CUDA driver application programming interface.

cuda.bindings.driver.cuLibraryLoadData(code, jitOptions: Optional[Tuple[CUjit_option] | List[CUjit_option]], jitOptionsValues: Optional[Tuple[Any] | List[Any]], unsigned int numJitOptions, libraryOptions: Optional[Tuple[CUlibraryOption] | List[CUlibraryOption]], libraryOptionValues: Optional[Tuple[Any] | List[Any]], unsigned int numLibraryOptions)

Load a library with specified code and options.

Takes a pointer code and loads the corresponding library library based on the application defined library loading mode:

  • If module loading is set to EAGER, via the environment variables described in “Module loading”, library is loaded eagerly into all contexts at the time of the call and future contexts at the time of creation until the library is unloaded with cuLibraryUnload().

  • If the environment variables are set to LAZY, the library is not immediately loaded onto all existing contexts and will only be loaded when a function is needed for that context, such as a kernel launch.

These environment variables are described in the CUDA programming guide under the “CUDA environment variables” section.

The code may be a cubin or fatbin as output by nvcc, or a NULL-terminated PTX, either as output by nvcc or hand-written. A fatbin should also contain relocatable code when doing separate compilation.

Options are passed as an array via jitOptions and any corresponding parameters are passed in jitOptionsValues. The number of total JIT options is supplied via numJitOptions. Any outputs will be returned via jitOptionsValues.

Library load options are passed as an array via libraryOptions and any corresponding parameters are passed in libraryOptionValues. The number of total library load options is supplied via numLibraryOptions.

Parameters:
  • code (Any) – Code to load

  • jitOptions (List[CUjit_option]) – Options for JIT

  • jitOptionsValues (List[Any]) – Option values for JIT

  • numJitOptions (unsigned int) – Number of options

  • libraryOptions (List[CUlibraryOption]) – Options for loading

  • libraryOptionValues (List[Any]) – Option values for loading

  • numLibraryOptions (unsigned int) – Number of options for loading

Returns:

Notes

If the library contains managed variables and no device in the system supports managed variables this call is expected to return CUDA_ERROR_NOT_SUPPORTED

cuda.bindings.driver.cuLibraryLoadFromFile(char *fileName, jitOptions: Optional[Tuple[CUjit_option] | List[CUjit_option]], jitOptionsValues: Optional[Tuple[Any] | List[Any]], unsigned int numJitOptions, libraryOptions: Optional[Tuple[CUlibraryOption] | List[CUlibraryOption]], libraryOptionValues: Optional[Tuple[Any] | List[Any]], unsigned int numLibraryOptions)

Load a library with specified file and options.

Takes a pointer code and loads the corresponding library library based on the application defined library loading mode:

  • If module loading is set to EAGER, via the environment variables described in “Module loading”, library is loaded eagerly into all contexts at the time of the call and future contexts at the time of creation until the library is unloaded with cuLibraryUnload().

  • If the environment variables are set to LAZY, the library is not immediately loaded onto all existing contexts and will only be loaded when a function is needed for that context, such as a kernel launch.

These environment variables are described in the CUDA programming guide under the “CUDA environment variables” section.

The file should be a cubin file as output by nvcc, or a PTX file either as output by nvcc or handwritten, or a fatbin file as output by nvcc. A fatbin should also contain relocatable code when doing separate compilation.

Options are passed as an array via jitOptions and any corresponding parameters are passed in jitOptionsValues. The number of total options is supplied via numJitOptions. Any outputs will be returned via jitOptionsValues.

Library load options are passed as an array via libraryOptions and any corresponding parameters are passed in libraryOptionValues. The number of total library load options is supplied via numLibraryOptions.

Parameters:
  • fileName (bytes) – File to load from

  • jitOptions (List[CUjit_option]) – Options for JIT

  • jitOptionsValues (List[Any]) – Option values for JIT

  • numJitOptions (unsigned int) – Number of options

  • libraryOptions (List[CUlibraryOption]) – Options for loading

  • libraryOptionValues (List[Any]) – Option values for loading

  • numLibraryOptions (unsigned int) – Number of options for loading

Returns:

Notes

If the library contains managed variables and no device in the system supports managed variables this call is expected to return CUDA_ERROR_NOT_SUPPORTED

cuda.bindings.driver.cuLibraryUnload(library)

Unloads a library.

Unloads the library specified with library

Parameters:

library (CUlibrary) – Library to unload

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult

cuda.bindings.driver.cuLibraryGetKernel(library, char *name)

Returns a kernel handle.

Returns in pKernel the handle of the kernel with name name located in library library. If kernel handle is not found, the call returns CUDA_ERROR_NOT_FOUND.

Parameters:
  • library (CUlibrary) – Library to retrieve kernel from

  • name (bytes) – Name of kernel to retrieve

Returns:
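
A minimal sketch of loading a library from an in-memory PTX image and fetching a kernel handle, assuming ptx_source is a bytes object containing PTX that defines my_kernel (both hypothetical). The image is kept NULL-terminated in a ctypes buffer and passed by address:

    import ctypes
    from cuda.bindings import driver

    buf = ctypes.create_string_buffer(ptx_source)  # appends the terminating NULL
    err, library = driver.cuLibraryLoadData(ctypes.addressof(buf),
                                            None, None, 0,   # no JIT options
                                            None, None, 0)   # no library options
    err, kernel = driver.cuLibraryGetKernel(library, b"my_kernel")

    # ... launch via cuLaunchKernelEx, or convert with cuKernelGetFunction ...

    err, = driver.cuLibraryUnload(library)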

cuda.bindings.driver.cuLibraryGetKernelCount(lib)

Returns the number of kernels within a library.

Returns in count the number of kernels in lib.

Parameters:

lib (CUlibrary) – Library to query

Returns:

cuda.bindings.driver.cuLibraryEnumerateKernels(unsigned int numKernels, lib)

Retrieve the kernel handles within a library.

Returns in kernels a maximum number of numKernels kernel handles within lib. The returned kernel handle becomes invalid when the library is unloaded.

Parameters:
  • numKernels (unsigned int) – Maximum number of kernel handles that may be returned to the buffer

  • lib (CUlibrary) – Library to query from

Returns:

cuda.bindings.driver.cuLibraryGetModule(library)

Returns a module handle.

Returns in pMod the module handle associated with the current context located in library library. If module handle is not found, the call returns CUDA_ERROR_NOT_FOUND.

Parameters:

library (CUlibrary) – Library to retrieve module from

Returns:

cuda.bindings.driver.cuKernelGetFunction(kernel)

Returns a function handle.

Returns in pFunc the handle of the function for the requested kernel kernel and the current context. If function handle is not found, the call returns CUDA_ERROR_NOT_FOUND.

Parameters:

kernel (CUkernel) – Kernel to retrieve function for the requested context

Returns:

cuda.bindings.driver.cuKernelGetLibrary(kernel)

Returns a library handle.

Returns in pLib the handle of the library for the requested kernel kernel

Parameters:

kernel (CUkernel) – Kernel to retrieve library handle

Returns:

cuda.bindings.driver.cuLibraryGetGlobal(library, char *name)

Returns a global device pointer.

Returns in *dptr and *bytes the base pointer and size of the global with name name for the requested library library and the current context. If no global for the requested name name exists, the call returns CUDA_ERROR_NOT_FOUND. One of the parameters dptr or numbytes (not both) can be NULL in which case it is ignored.

Parameters:
  • library (CUlibrary) – Library to retrieve global from

  • name (bytes) – Name of global to retrieve

Returns:

cuda.bindings.driver.cuLibraryGetManaged(library, char *name)

Returns a pointer to managed memory.

Returns in *dptr and *bytes the base pointer and size of the managed memory with name name for the requested library library. If no managed memory with the requested name name exists, the call returns CUDA_ERROR_NOT_FOUND. One of the parameters dptr or numbytes (not both) can be NULL in which case it is ignored. Note that managed memory for library library is shared across devices and is registered when the library is loaded into at least one context.

Parameters:
  • library (CUlibrary) – Library to retrieve managed memory from

  • name (bytes) – Name of managed memory to retrieve

Returns:

cuda.bindings.driver.cuLibraryGetUnifiedFunction(library, char *symbol)

Returns a pointer to a unified function.

Returns in *fptr the function pointer to a unified function denoted by symbol. If no unified function with name symbol exists, the call returns CUDA_ERROR_NOT_FOUND. If there is no device with attribute CU_DEVICE_ATTRIBUTE_UNIFIED_FUNCTION_POINTERS present in the system, the call may return CUDA_ERROR_NOT_FOUND.

Parameters:
  • library (CUlibrary) – Library to retrieve function pointer memory from

  • symbol (bytes) – Name of function pointer to retrieve

Returns:

cuda.bindings.driver.cuKernelGetAttribute(attrib: CUfunction_attribute, kernel, dev)

Returns information about a kernel.

Returns in *pi the integer value of the attribute attrib for the kernel kernel for the requested device dev. The supported attributes are:

  • CU_FUNC_ATTRIBUTE_MAX_THREADS_PER_BLOCK: The maximum number of threads per block, beyond which a launch of the kernel would fail. This number depends on both the kernel and the requested device.

  • CU_FUNC_ATTRIBUTE_SHARED_SIZE_BYTES: The size in bytes of statically-allocated shared memory per block required by this kernel. This does not include dynamically-allocated shared memory requested by the user at runtime.

  • CU_FUNC_ATTRIBUTE_CONST_SIZE_BYTES: The size in bytes of user-allocated constant memory required by this kernel.

  • CU_FUNC_ATTRIBUTE_LOCAL_SIZE_BYTES: The size in bytes of local memory used by each thread of this kernel.

  • CU_FUNC_ATTRIBUTE_NUM_REGS: The number of registers used by each thread of this kernel.

  • CU_FUNC_ATTRIBUTE_PTX_VERSION: The PTX virtual architecture version for which the kernel was compiled. This value is the major PTX version * 10 + the minor PTX version, so a PTX version 1.3 function would return the value 13. Note that this may return the undefined value of 0 for cubins compiled prior to CUDA 3.0.

  • CU_FUNC_ATTRIBUTE_BINARY_VERSION: The binary architecture version for which the kernel was compiled. This value is the major binary version * 10 + the minor binary version, so a binary version 1.3 function would return the value 13. Note that this will return a value of 10 for legacy cubins that do not have a properly-encoded binary architecture version.

  • CU_FUNC_CACHE_MODE_CA: The attribute to indicate whether the kernel has been compiled with the user-specified option “-Xptxas --dlcm=ca” set.

  • CU_FUNC_ATTRIBUTE_MAX_DYNAMIC_SHARED_SIZE_BYTES: The maximum size in bytes of dynamically-allocated shared memory.

  • CU_FUNC_ATTRIBUTE_PREFERRED_SHARED_MEMORY_CARVEOUT: Preferred shared memory-L1 cache split ratio in percent of total shared memory.

  • CU_FUNC_ATTRIBUTE_CLUSTER_SIZE_MUST_BE_SET: If this attribute is set, the kernel must launch with a valid cluster size specified.

  • CU_FUNC_ATTRIBUTE_REQUIRED_CLUSTER_WIDTH: The required cluster width in blocks.

  • CU_FUNC_ATTRIBUTE_REQUIRED_CLUSTER_HEIGHT: The required cluster height in blocks.

  • CU_FUNC_ATTRIBUTE_REQUIRED_CLUSTER_DEPTH: The required cluster depth in blocks.

  • CU_FUNC_ATTRIBUTE_NON_PORTABLE_CLUSTER_SIZE_ALLOWED: Indicates whether the function can be launched with a non-portable cluster size. 1 is allowed, 0 is disallowed. A non-portable cluster size may only function on the specific SKUs the program is tested on. The launch might fail if the program is run on a different hardware platform. The CUDA API provides cudaOccupancyMaxActiveClusters to assist with checking whether the desired size can be launched on the current device. A portable cluster size is guaranteed to be functional on all compute capabilities higher than the target compute capability. The portable cluster size for sm_90 is 8 blocks per cluster. This value may increase for future compute capabilities. The specific hardware unit may support higher cluster sizes that are not guaranteed to be portable.

  • CU_FUNC_ATTRIBUTE_CLUSTER_SCHEDULING_POLICY_PREFERENCE: The block scheduling policy of a function. The value type is CUclusterSchedulingPolicy.

Parameters:
Returns:

Notes

If another thread is trying to set the same attribute on the same device using cuKernelSetAttribute() simultaneously, the attribute query will give the old or new value depending on the interleavings chosen by the OS scheduler and memory consistency.
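
A minimal sketch of querying one of the attributes above, assuming kernel is a CUkernel obtained from a loaded library and dev is a CUdevice (hypothetical names):

    from cuda.bindings import driver

    attr = driver.CUfunction_attribute.CU_FUNC_ATTRIBUTE_MAX_THREADS_PER_BLOCK
    err, max_threads = driver.cuKernelGetAttribute(attr, kernel, dev)
    print(f"maximum threads per block for this kernel on this device: {max_threads}")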

cuda.bindings.driver.cuKernelSetAttribute(attrib: CUfunction_attribute, int val, kernel, dev)

Sets information about a kernel.

This call sets the value of a specified attribute attrib on the kernel kernel for the requested device dev to an integer value specified by val. This function returns CUDA_SUCCESS if the new value of the attribute could be successfully set. If the set fails, this call will return an error. Not all attributes can have values set. Attempting to set a value on a read-only attribute will result in an error (CUDA_ERROR_INVALID_VALUE)

Note that attributes set using cuFuncSetAttribute() will override the attribute set by this API irrespective of whether the call to cuFuncSetAttribute() is made before or after this API call. However, cuKernelGetAttribute() will always return the attribute value set by this API.

Supported attributes are:

Parameters:
Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_HANDLE, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_INVALID_DEVICE, CUDA_ERROR_OUT_OF_MEMORY

Return type:

CUresult

Notes

The API has stricter locking requirements in comparison to its legacy counterpart cuFuncSetAttribute() due to device-wide semantics. If multiple threads are trying to set the same attribute on the same device simultaneously, the attribute setting will depend on the interleavings chosen by the OS scheduler and memory consistency.

cuda.bindings.driver.cuKernelSetCacheConfig(kernel, config: CUfunc_cache, dev)

Sets the preferred cache configuration for a device kernel.

On devices where the L1 cache and shared memory use the same hardware resources, this sets through config the preferred cache configuration for the device kernel kernel on the requested device dev. This is only a preference. The driver will use the requested configuration if possible, but it is free to choose a different configuration if required to execute kernel. Any context-wide preference set via cuCtxSetCacheConfig() will be overridden by this per-kernel setting.

Note that attributes set using cuFuncSetCacheConfig() will override the attribute set by this API irrespective of whether the call to cuFuncSetCacheConfig() is made before or after this API call.

This setting does nothing on devices where the size of the L1 cache and shared memory are fixed.

Launching a kernel with a different preference than the most recent preference setting may insert a device-side synchronization point.

The supported cache configurations are:

Parameters:
  • kernel (CUkernel) – Kernel to configure cache for

  • config (CUfunc_cache) – Requested cache configuration

  • dev (CUdevice) – Device to set attribute of

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_HANDLE, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_INVALID_DEVICE, CUDA_ERROR_OUT_OF_MEMORY

Return type:

CUresult

Notes

The API has stricter locking requirements in comparison to its legacy counterpart cuFuncSetCacheConfig() due to device-wide semantics. If multiple threads are trying to set a config on the same device simultaneously, the cache config setting will depend on the interleavings chosen by the OS scheduler and memory consistency.

cuda.bindings.driver.cuKernelGetName(hfunc)

Returns the function name for a CUkernel handle.

Returns in **name the function name associated with the kernel handle hfunc. The function name is returned as a null-terminated string. The returned name is only valid when the kernel handle is valid. If the library is unloaded or reloaded, one must call the API again to get the updated name. This API may return a mangled name if the function is not declared as having C linkage. If either **name or hfunc is NULL, CUDA_ERROR_INVALID_VALUE is returned.

Parameters:

hfunc (CUkernel) – The function handle to retrieve the name for

Returns:

cuda.bindings.driver.cuKernelGetParamInfo(kernel, size_t paramIndex)

Returns the offset and size of a kernel parameter in the device-side parameter layout.

Queries the kernel parameter at paramIndex into kernel’s list of parameters, and returns in paramOffset and paramSize the offset and size, respectively, where the parameter will reside in the device-side parameter layout. This information can be used to update kernel node parameters from the device via cudaGraphKernelNodeSetParam() and cudaGraphKernelNodeUpdatesApply(). paramIndex must be less than the number of parameters that kernel takes. paramSize can be set to NULL if only the parameter offset is desired.

Parameters:
  • kernel (CUkernel) – The kernel to query

  • paramIndex (size_t) – The parameter index to query

Returns:

  • CUresult – CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE

  • paramOffset (int) – Returns the offset into the device-side parameter layout at which the parameter resides

  • paramSize (int) – Optionally returns the size of the parameter in the device-side parameter layout

Memory Management

This section describes the memory management functions of the low-level CUDA driver application programming interface.

cuda.bindings.driver.cuMemGetInfo()

Gets free and total memory.

Returns in *total the total amount of memory available to the current context. Returns in *free the amount of memory on the device that is free according to the OS. CUDA is not guaranteed to be able to allocate all of the memory that the OS reports as free. In a multi-tenant situation, the free estimate returned is prone to a race condition: an allocation or free performed by a different process, or by a different thread in the same process, between the time free memory is estimated and the time it is reported will cause the reported free value to deviate from the actual free memory.

The integrated GPU on Tegra shares memory with the CPU and other components of the SoC. The free and total values returned by the API exclude the SWAP memory space maintained by the OS on some platforms. The OS may move some of the memory pages into the swap area as the GPU or CPU allocate or access memory. See the Tegra app note on how to calculate total and free memory on Tegra.

Returns:
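
A minimal sketch of querying memory for the current context; as noted above, the free value is only an estimate:

    from cuda.bindings import driver

    err, free, total = driver.cuMemGetInfo()
    print(f"free: {free / 2**20:.1f} MiB of {total / 2**20:.1f} MiB total")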

cuda.bindings.driver.cuMemAlloc(size_t bytesize)

Allocates device memory.

Allocates bytesize bytes of linear memory on the device and returns in *dptr a pointer to the allocated memory. The allocated memory is suitably aligned for any kind of variable. The memory is not cleared. If bytesize is 0, cuMemAlloc() returns CUDA_ERROR_INVALID_VALUE.

Parameters:

bytesize (size_t) – Requested allocation size in bytes

Returns:

cuda.bindings.driver.cuMemAllocPitch(size_t WidthInBytes, size_t Height, unsigned int ElementSizeBytes)

Allocates pitched device memory.

Allocates at least WidthInBytes * Height bytes of linear memory on the device and returns in *dptr a pointer to the allocated memory. The function may pad the allocation to ensure that corresponding pointers in any given row will continue to meet the alignment requirements for coalescing as the address is updated from row to row. ElementSizeBytes specifies the size of the largest reads and writes that will be performed on the memory range. ElementSizeBytes may be 4, 8 or 16 (since coalesced memory transactions are not possible on other data sizes). If ElementSizeBytes is smaller than the actual read/write size of a kernel, the kernel will run correctly, but possibly at reduced speed. The pitch returned in *pPitch by cuMemAllocPitch() is the width in bytes of the allocation. The intended usage of pitch is as a separate parameter of the allocation, used to compute addresses within the 2D array. Given the row and column of an array element of type T, the address is computed as:

View CUDA Toolkit Documentation for a C++ code example

The pitch returned by cuMemAllocPitch() is guaranteed to work with cuMemcpy2D() under all circumstances. For allocations of 2D arrays, it is recommended that programmers consider performing pitch allocations using cuMemAllocPitch(). Due to alignment restrictions in the hardware, this is especially true if the application will be performing 2D memory copies between different regions of device memory (whether linear memory or CUDA arrays).

The byte alignment of the pitch returned by cuMemAllocPitch() is guaranteed to match or exceed the alignment requirement for texture binding with cuTexRefSetAddress2D().

Parameters:
  • WidthInBytes (size_t) – Requested allocation width in bytes

  • Height (size_t) – Requested allocation height in rows

  • ElementSizeBytes (unsigned int) – Size of largest reads/writes for range

Returns:
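
A minimal sketch of allocating a pitched 2D buffer and computing the device address of an element from the returned pitch, mirroring the addressing formula above (a 640x480 float32 buffer is assumed for illustration):

    from cuda.bindings import driver

    elem_size = 4                                   # sizeof(float)
    width_bytes, height = 640 * elem_size, 480
    err, dptr, pitch = driver.cuMemAllocPitch(width_bytes, height, elem_size)

    row, col = 10, 20
    # address of element (row, col): base + row * pitch + col * sizeof(T)
    addr = int(dptr) + row * pitch + col * elem_size

    err, = driver.cuMemFree(dptr)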

cuda.bindings.driver.cuMemFree(dptr)

Frees device memory.

Frees the memory space pointed to by dptr, which must have been returned by a previous call to one of the following memory allocation APIs - cuMemAlloc(), cuMemAllocPitch(), cuMemAllocManaged(), cuMemAllocAsync(), cuMemAllocFromPoolAsync()

Note - This API will not perform any implicit synchronization when the pointer was allocated with cuMemAllocAsync or cuMemAllocFromPoolAsync. Callers must ensure that all accesses to such pointers have completed before invoking cuMemFree. For best performance and memory reuse, users should use cuMemFreeAsync to free memory allocated via the stream ordered memory allocator. For all other pointers, this API may perform implicit synchronization.

Parameters:

dptr (CUdeviceptr) – Pointer to memory to free

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult

cuda.bindings.driver.cuMemGetAddressRange(dptr)

Get information on memory allocations.

Returns the base address in *pbase and size in *psize of the allocation by cuMemAlloc() or cuMemAllocPitch() that contains the input pointer dptr. Both parameters pbase and psize are optional. If one of them is NULL, it is ignored.

Parameters:

dptr (CUdeviceptr) – Device pointer to query

Returns:

cuda.bindings.driver.cuMemAllocHost(size_t bytesize)

Allocates page-locked host memory.

Allocates bytesize bytes of host memory that is page-locked and accessible to the device. The driver tracks the virtual memory ranges allocated with this function and automatically accelerates calls to functions such as cuMemcpy(). Since the memory can be accessed directly by the device, it can be read or written with much higher bandwidth than pageable memory obtained with functions such as malloc().

On systems where CU_DEVICE_ATTRIBUTE_PAGEABLE_MEMORY_ACCESS_USES_HOST_PAGE_TABLES is true, cuMemAllocHost may not page-lock the allocated memory.

Page-locking excessive amounts of memory with cuMemAllocHost() may degrade system performance, since it reduces the amount of memory available to the system for paging. As a result, this function is best used sparingly to allocate staging areas for data exchange between host and device.

Note all host memory allocated using cuMemAllocHost() will automatically be immediately accessible to all contexts on all devices which support unified addressing (as may be queried using CU_DEVICE_ATTRIBUTE_UNIFIED_ADDRESSING). The device pointer that may be used to access this host memory from those contexts is always equal to the returned host pointer *pp. See Unified Addressing for additional details.
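
A minimal sketch of allocating and freeing page-locked host memory, assuming an initialized context; the returned pointer is only shown being handed back to cuMemFreeHost:

    from cuda.bindings import driver

    nbytes = 1 << 20

    # Page-locked host allocation (use sparingly; see the note above).
    err, host_ptr = driver.cuMemAllocHost(nbytes)
    assert err == driver.CUresult.CUDA_SUCCESS

    # ... stage transfers through host_ptr, e.g. with cuMemcpyHtoD/cuMemcpyDtoH ...

    err, = driver.cuMemFreeHost(host_ptr)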

Parameters:

bytesize (size_t) – Requested allocation size in bytes

Returns:

cuda.bindings.driver.cuMemFreeHost(p)

Frees page-locked host memory.

Frees the memory space pointed to by p, which must have been returned by a previous call to cuMemAllocHost().

Parameters:

p (Any) – Pointer to memory to free

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult

cuda.bindings.driver.cuMemHostAlloc(size_t bytesize, unsigned int Flags)

Allocates page-locked host memory.

Allocates bytesize bytes of host memory that is page-locked and accessible to the device. The driver tracks the virtual memory ranges allocated with this function and automatically accelerates calls to functions such as cuMemcpyHtoD(). Since the memory can be accessed directly by the device, it can be read or written with much higher bandwidth than pageable memory obtained with functions such as malloc().

On systems where CU_DEVICE_ATTRIBUTE_PAGEABLE_MEMORY_ACCESS_USES_HOST_PAGE_TABLES is true, cuMemHostAlloc may not page-lock the allocated memory.

Page-locking excessive amounts of memory may degrade system performance, since it reduces the amount of memory available to the system for paging. As a result, this function is best used sparingly to allocate staging areas for data exchange between host and device.

The Flags parameter enables different options to be specified that affect the allocation, as follows.

  • CU_MEMHOSTALLOC_PORTABLE: The memory returned by this call will be considered as pinned memory by all CUDA contexts, not just the one that performed the allocation.

  • CU_MEMHOSTALLOC_DEVICEMAP: Maps the allocation into the CUDA address space. The device pointer to the memory may be obtained by calling cuMemHostGetDevicePointer().

  • CU_MEMHOSTALLOC_WRITECOMBINED: Allocates the memory as write-combined (WC). WC memory can be transferred across the PCI Express bus more quickly on some system configurations, but cannot be read efficiently by most CPUs. WC memory is a good option for buffers that will be written by the CPU and read by the GPU via mapped pinned memory or host->device transfers.

All of these flags are orthogonal to one another: a developer may allocate memory that is portable, mapped and/or write-combined with no restrictions.

The CU_MEMHOSTALLOC_DEVICEMAP flag may be specified on CUDA contexts for devices that do not support mapped pinned memory. The failure is deferred to cuMemHostGetDevicePointer() because the memory may be mapped into other CUDA contexts via the CU_MEMHOSTALLOC_PORTABLE flag.

The memory allocated by this function must be freed with cuMemFreeHost().

Note all host memory allocated using cuMemHostAlloc() will automatically be immediately accessible to all contexts on all devices which support unified addressing (as may be queried using CU_DEVICE_ATTRIBUTE_UNIFIED_ADDRESSING). Unless the flag CU_MEMHOSTALLOC_WRITECOMBINED is specified, the device pointer that may be used to access this host memory from those contexts is always equal to the returned host pointer *pp. If the flag CU_MEMHOSTALLOC_WRITECOMBINED is specified, then the function cuMemHostGetDevicePointer() must be used to query the device pointer, even if the context supports unified addressing. See Unified Addressing for additional details.
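
A sketch of a mapped, pinned allocation followed by a device-pointer lookup, assuming the CU_MEMHOSTALLOC_* flags are exposed as module-level constants (as in the C API) and that a context is current:

    from cuda.bindings import driver

    nbytes = 1 << 20
    flags = driver.CU_MEMHOSTALLOC_DEVICEMAP | driver.CU_MEMHOSTALLOC_PORTABLE

    err, host_ptr = driver.cuMemHostAlloc(nbytes, flags)
    assert err == driver.CUresult.CUDA_SUCCESS

    # Device-side alias of the pinned buffer (the Flags argument must be 0 here).
    err, dev_ptr = driver.cuMemHostGetDevicePointer(host_ptr, 0)

    err, = driver.cuMemFreeHost(host_ptr)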

Parameters:
  • bytesize (size_t) – Requested allocation size in bytes

  • Flags (unsigned int) – Flags for allocation request

Returns:

cuda.bindings.driver.cuMemHostGetDevicePointer(p, unsigned int Flags)

Passes back device pointer of mapped pinned memory.

Passes back the device pointer pdptr corresponding to the mapped, pinned host buffer p allocated by cuMemHostAlloc.

cuMemHostGetDevicePointer() will fail if the CU_MEMHOSTALLOC_DEVICEMAP flag was not specified at the time the memory was allocated, or if the function is called on a GPU that does not support mapped pinned memory.

For devices that have a non-zero value for the device attribute CU_DEVICE_ATTRIBUTE_CAN_USE_HOST_POINTER_FOR_REGISTERED_MEM, the memory can also be accessed from the device using the host pointer p. The device pointer returned by cuMemHostGetDevicePointer() may or may not match the original host pointer p and depends on the devices visible to the application. If all devices visible to the application have a non-zero value for the device attribute, the device pointer returned by cuMemHostGetDevicePointer() will match the original pointer p. If any device visible to the application has a zero value for the device attribute, the device pointer returned by cuMemHostGetDevicePointer() will not match the original host pointer p, but it will be suitable for use on all devices provided Unified Virtual Addressing is enabled. In such systems, it is valid to access the memory using either pointer on devices that have a non-zero value for the device attribute. Note however that such devices should access the memory using only one of the two pointers and not both.

Flags is provided for future releases; for now, it must be set to 0.

Parameters:
  • p (Any) – Host pointer

  • Flags (unsigned int) – Options (must be 0)

Returns:

cuda.bindings.driver.cuMemHostGetFlags(p)

Passes back flags that were used for a pinned allocation.

Passes back the flags pFlags that were specified when allocating the pinned host buffer p allocated by cuMemHostAlloc.

cuMemHostGetFlags() will fail if the pointer does not reside in an allocation performed by cuMemAllocHost() or cuMemHostAlloc().

Parameters:

p (Any) – Host pointer

Returns:

cuda.bindings.driver.cuMemAllocManaged(size_t bytesize, unsigned int flags)

Allocates memory that will be automatically managed by the Unified Memory system.

Allocates bytesize bytes of managed memory on the device and returns in *dptr a pointer to the allocated memory. If the device doesn’t support allocating managed memory, CUDA_ERROR_NOT_SUPPORTED is returned. Support for managed memory can be queried using the device attribute CU_DEVICE_ATTRIBUTE_MANAGED_MEMORY. The allocated memory is suitably aligned for any kind of variable. The memory is not cleared. If bytesize is 0, cuMemAllocManaged returns CUDA_ERROR_INVALID_VALUE. The pointer is valid on the CPU and on all GPUs in the system that support managed memory. All accesses to this pointer must obey the Unified Memory programming model.

flags specifies the default stream association for this allocation. flags must be one of CU_MEM_ATTACH_GLOBAL or CU_MEM_ATTACH_HOST. If CU_MEM_ATTACH_GLOBAL is specified, then this memory is accessible from any stream on any device. If CU_MEM_ATTACH_HOST is specified, then the allocation should not be accessed from devices that have a zero value for the device attribute CU_DEVICE_ATTRIBUTE_CONCURRENT_MANAGED_ACCESS; an explicit call to cuStreamAttachMemAsync will be required to enable access on such devices.

If the association is later changed via cuStreamAttachMemAsync to a single stream, the default association as specified during cuMemAllocManaged is restored when that stream is destroyed. For managed variables, the default association is always CU_MEM_ATTACH_GLOBAL. Note that destroying a stream is an asynchronous operation, and as a result, the change to default association won’t happen until all work in the stream has completed.

Memory allocated with cuMemAllocManaged should be released with cuMemFree.

Device memory oversubscription is possible for GPUs that have a non-zero value for the device attribute CU_DEVICE_ATTRIBUTE_CONCURRENT_MANAGED_ACCESS. Managed memory on such GPUs may be evicted from device memory to host memory at any time by the Unified Memory driver in order to make room for other allocations.

In a system where all GPUs have a non-zero value for the device attribute CU_DEVICE_ATTRIBUTE_CONCURRENT_MANAGED_ACCESS, managed memory may not be populated when this API returns and instead may be populated on access. In such systems, managed memory can migrate to any processor’s memory at any time. The Unified Memory driver will employ heuristics to maintain data locality and prevent excessive page faults to the extent possible. The application can also guide the driver about memory usage patterns via cuMemAdvise. The application can also explicitly migrate memory to a desired processor’s memory via cuMemPrefetchAsync.

In a multi-GPU system where all of the GPUs have a zero value for the device attribute CU_DEVICE_ATTRIBUTE_CONCURRENT_MANAGED_ACCESS and all the GPUs have peer-to-peer support with each other, the physical storage for managed memory is created on the GPU which is active at the time cuMemAllocManaged is called. All other GPUs will reference the data at reduced bandwidth via peer mappings over the PCIe bus. The Unified Memory driver does not migrate memory among such GPUs.

In a multi-GPU system where not all GPUs have peer-to-peer support with each other and where the value of the device attribute CU_DEVICE_ATTRIBUTE_CONCURRENT_MANAGED_ACCESS is zero for at least one of those GPUs, the location chosen for physical storage of managed memory is system-dependent.

  • On Linux, the location chosen will be device memory as long as the current set of active contexts are on devices that either have peer-to-peer support with each other or have a non-zero value for the device attribute CU_DEVICE_ATTRIBUTE_CONCURRENT_MANAGED_ACCESS. If there is an active context on a GPU that does not have a non-zero value for that device attribute and it does not have peer-to-peer support with the other devices that have active contexts on them, then the location for physical storage will be ‘zero-copy’ or host memory. Note that this means that managed memory that is located in device memory is migrated to host memory if a new context is created on a GPU that doesn’t have a non-zero value for the device attribute and does not support peer-to-peer with at least one of the other devices that has an active context. This in turn implies that context creation may fail if there is insufficient host memory to migrate all managed allocations.

  • On Windows, the physical storage is always created in ‘zero-copy’ or host memory. All GPUs will reference the data at reduced bandwidth over the PCIe bus. In these circumstances, use of the environment variable CUDA_VISIBLE_DEVICES is recommended to restrict CUDA to only use those GPUs that have peer-to-peer support. Alternatively, users can also set CUDA_MANAGED_FORCE_DEVICE_ALLOC to a non-zero value to force the driver to always use device memory for physical storage. When this environment variable is set to a non-zero value, all contexts created in that process on devices that support managed memory have to be peer-to-peer compatible with each other. Context creation will fail if a context is created on a device that supports managed memory and is not peer-to-peer compatible with any of the other managed memory supporting devices on which contexts were previously created, even if those contexts have been destroyed. These environment variables are described in the CUDA programming guide under the “CUDA environment variables” section.

  • On ARM, managed memory is not available on the discrete GPU with Drive PX-2.
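
A minimal cuMemAllocManaged sketch, assuming device support for managed memory and assuming the CUmemAttach_flags enum exposed by these bindings; the .value of the enum member is passed since the call takes an unsigned int:

    from cuda.bindings import driver

    nbytes = 1 << 20
    attach = driver.CUmemAttach_flags.CU_MEM_ATTACH_GLOBAL.value

    err, dptr = driver.cuMemAllocManaged(nbytes, attach)
    assert err == driver.CUresult.CUDA_SUCCESS

    # The pointer is usable from the host and from managed-memory-capable GPUs,
    # subject to the Unified Memory programming model described above.

    err, = driver.cuMemFree(dptr)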

Parameters:
Returns:

cuda.bindings.driver.cuDeviceRegisterAsyncNotification(device, callbackFunc, userData)

Registers a callback function to receive async notifications.

Registers callbackFunc to receive async notifications.

The userData parameter is passed to the callback function at async notification time. Likewise, callback is also passed to the callback function to distinguish between multiple registered callbacks.

The callback function being registered should be designed to return quickly (~10ms). Any long running tasks should be queued for execution on an application thread.

Callbacks may not call cuDeviceRegisterAsyncNotification or cuDeviceUnregisterAsyncNotification. Doing so will result in CUDA_ERROR_NOT_PERMITTED. Async notification callbacks execute in an undefined order and may be serialized.

Returns in *callback a handle representing the registered callback instance.

Parameters:
  • device (CUdevice) – The device on which to register the callback

  • callbackFunc (CUasyncCallback) – The function to register as a callback

  • userData (Any) – A generic pointer to user data. This is passed into the callback function.

Returns:

cuda.bindings.driver.cuDeviceUnregisterAsyncNotification(device, callback)

Unregisters an async notification callback.

Unregisters callback so that the corresponding callback function will stop receiving async notifications.

Parameters:
  • device (CUdevice) – The device from which to remove callback.

  • callback (CUasyncCallbackHandle) – The callback instance to unregister from receiving async notifications.

Returns:

CUDA_SUCCESS, CUDA_ERROR_NOT_SUPPORTED, CUDA_ERROR_INVALID_DEVICE, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_NOT_PERMITTED, CUDA_ERROR_UNKNOWN

Return type:

CUresult

cuda.bindings.driver.cuDeviceGetByPCIBusId(char *pciBusId)

Returns a handle to a compute device.

Returns in *device a device handle given a PCI bus ID string in one of the forms listed below, where domain, bus, device, and function are all hexadecimal values.

Parameters:

pciBusId (bytes) – String in one of the following forms: [domain]:[bus]:[device].[function], [domain]:[bus]:[device], or [bus]:[device].[function]

Returns:

cuda.bindings.driver.cuDeviceGetPCIBusId(int length, dev)

Returns a PCI Bus Id string for the device.

Returns an ASCII string identifying the device dev in the NULL-terminated string pointed to by pciBusId. length specifies the maximum length of the string that may be returned.

The returned identifier has the form [domain]:[bus]:[device].[function], where domain, bus, device, and function are all hexadecimal values. pciBusId should be large enough to store 13 characters including the NULL-terminator.
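
A short sketch of retrieving the PCI bus ID string for a device and resolving the same device back from it; the 13-byte length follows the note above, and the identifier is assumed to come back as bytes:

    from cuda.bindings import driver

    err, = driver.cuInit(0)
    err, dev = driver.cuDeviceGet(0)

    # 13 characters cover "domain:bus:device.function" plus the NULL terminator.
    err, bus_id = driver.cuDeviceGetPCIBusId(13, dev)
    assert err == driver.CUresult.CUDA_SUCCESS

    # Round-trip: look the device handle up again from the string.
    err, same_dev = driver.cuDeviceGetByPCIBusId(bus_id)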

Parameters:
  • length (int) – Maximum length of string to store in name

  • dev (CUdevice) – Device to get identifier string for

Returns:

cuda.bindings.driver.cuIpcGetEventHandle(event)

Gets an interprocess handle for a previously allocated event.

Takes as input a previously allocated event. This event must have been created with the CU_EVENT_INTERPROCESS and CU_EVENT_DISABLE_TIMING flags set. This opaque handle may be copied into other processes and opened with cuIpcOpenEventHandle to allow efficient hardware synchronization between GPU work in different processes.

After the event has been opened in the importing process, cuEventRecord, cuEventSynchronize, cuStreamWaitEvent and cuEventQuery may be used in either process. Performing operations on the imported event after the exported event has been freed with cuEventDestroy will result in undefined behavior.

IPC functionality is restricted to devices with support for unified addressing on Linux and Windows operating systems. IPC functionality on Windows is supported for compatibility purposes but not recommended, as it comes with a performance cost. Users can test their device for IPC functionality by calling cuDeviceGetAttribute with CU_DEVICE_ATTRIBUTE_IPC_EVENT_SUPPORTED.

Parameters:

event (CUevent or cudaEvent_t) – Event allocated with CU_EVENT_INTERPROCESS and CU_EVENT_DISABLE_TIMING flags.

Returns:

cuda.bindings.driver.cuIpcOpenEventHandle(CUipcEventHandle handle: CUipcEventHandle)

Opens an interprocess event handle for use in the current process.

Opens an interprocess event handle exported from another process with cuIpcGetEventHandle. This function returns a CUevent that behaves like a locally created event with the CU_EVENT_DISABLE_TIMING flag specified. This event must be freed with cuEventDestroy.

Performing operations on the imported event after the exported event has been freed with cuEventDestroy will result in undefined behavior.

IPC functionality is restricted to devices with support for unified addressing on Linux and Windows operating systems. IPC functionality on Windows is supported for compatibility purposes but not recommended, as it comes with a performance cost. Users can test their device for IPC functionality by calling cuDeviceGetAttribute with CU_DEVICE_ATTRIBUTE_IPC_EVENT_SUPPORTED.

Parameters:

handle (CUipcEventHandle) – Interprocess handle to open

Returns:

cuda.bindings.driver.cuIpcGetMemHandle(dptr)

Gets an interprocess memory handle for an existing device memory allocation.

Takes a pointer to the base of an existing device memory allocation created with cuMemAlloc and exports it for use in another process. This is a lightweight operation and may be called multiple times on an allocation without adverse effects.

If a region of memory is freed with cuMemFree and a subsequent call to cuMemAlloc returns memory with the same device address, cuIpcGetMemHandle will return a unique handle for the new memory.

IPC functionality is restricted to devices with support for unified addressing on Linux and Windows operating systems. IPC functionality on Windows is supported for compatibility purposes but not recommended, as it comes with a performance cost. Users can test their device for IPC functionality by calling cuDeviceGetAttribute with CU_DEVICE_ATTRIBUTE_IPC_EVENT_SUPPORTED.

Parameters:

dptr (CUdeviceptr) – Base pointer to previously allocated device memory

Returns:

cuda.bindings.driver.cuIpcOpenMemHandle(CUipcMemHandle handle: CUipcMemHandle, unsigned int Flags)

Opens an interprocess memory handle exported from another process and returns a device pointer usable in the local process.

Maps memory exported from another process with cuIpcGetMemHandle into the current device address space. For contexts on different devices cuIpcOpenMemHandle can attempt to enable peer access between the devices as if the user called cuCtxEnablePeerAccess. This behavior is controlled by the CU_IPC_MEM_LAZY_ENABLE_PEER_ACCESS flag. cuDeviceCanAccessPeer can determine if a mapping is possible.

Contexts that may open CUipcMemHandles are restricted in the following way. CUipcMemHandles from each CUdevice in a given process may only be opened by one CUcontext per CUdevice per other process.

If the memory handle has already been opened by the current context, the reference count on the handle is incremented by 1 and the existing device pointer is returned.

Memory returned from cuIpcOpenMemHandle must be freed with cuIpcCloseMemHandle.

Calling cuMemFree on an exported memory region before calling cuIpcCloseMemHandle in the importing context will result in undefined behavior.

IPC functionality is restricted to devices with support for unified addressing on Linux and Windows operating systems. IPC functionality on Windows is supported for compatibility purposes but not recommended, as it comes with a performance cost. Users can test their device for IPC functionality by calling cuDeviceGetAttribute with CU_DEVICE_ATTRIBUTE_IPC_EVENT_SUPPORTED.

Parameters:
Returns:

Notes

No guarantees are made about the address returned in *pdptr. In particular, multiple processes may not receive the same address for the same handle.
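
A hedged sketch of the export/import flow for IPC memory handles; the two halves run in different processes, and the transport of the handle bytes between them (shown here as hypothetical send_to_peer/recv_from_peer helpers) is up to the application, as is the assumption that the handle's reserved field can be read and assigned directly:

    from cuda.bindings import driver

    # --- exporting process (owns the cuMemAlloc allocation) ---
    err, dptr = driver.cuMemAlloc(1 << 20)
    err, handle = driver.cuIpcGetMemHandle(dptr)
    send_to_peer(bytes(handle.reserved))        # hypothetical IPC channel

    # --- importing process ---
    handle = driver.CUipcMemHandle()
    handle.reserved = recv_from_peer()          # hypothetical IPC channel
    err, peer_ptr = driver.cuIpcOpenMemHandle(handle, 0)
    # ... use peer_ptr ...
    err, = driver.cuIpcCloseMemHandle(peer_ptr)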

cuda.bindings.driver.cuIpcCloseMemHandle(dptr)

Attempts to close memory mapped with cuIpcOpenMemHandle.

Decrements the reference count of the memory returned by cuIpcOpenMemHandle by 1. When the reference count reaches 0, this API unmaps the memory. The original allocation in the exporting process as well as imported mappings in other processes will be unaffected.

Any resources used to enable peer access will be freed if this is the last mapping using them.

IPC functionality is restricted to devices with support for unified addressing on Linux and Windows operating systems. IPC functionality on Windows is supported for compatibility purposes but not recommended, as it comes with a performance cost. Users can test their device for IPC functionality by calling cuDeviceGetAttribute with CU_DEVICE_ATTRIBUTE_IPC_EVENT_SUPPORTED.

Parameters:

dptr (CUdeviceptr) – Device pointer returned by cuIpcOpenMemHandle

Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_MAP_FAILED, CUDA_ERROR_INVALID_HANDLE, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult

cuda.bindings.driver.cuMemHostRegister(p, size_t bytesize, unsigned int Flags)

Registers an existing host memory range for use by CUDA.

Page-locks the memory range specified by p and bytesize and maps it for the device(s) as specified by Flags. This memory range also is added to the same tracking mechanism as cuMemHostAlloc to automatically accelerate calls to functions such as cuMemcpyHtoD(). Since the memory can be accessed directly by the device, it can be read or written with much higher bandwidth than pageable memory that has not been registered. Page-locking excessive amounts of memory may degrade system performance, since it reduces the amount of memory available to the system for paging. As a result, this function is best used sparingly to register staging areas for data exchange between host and device.

On systems where CU_DEVICE_ATTRIBUTE_PAGEABLE_MEMORY_ACCESS_USES_HOST_PAGE_TABLES is true, cuMemHostRegister will not page-lock the memory range specified by ptr but only populate unpopulated pages.

The Flags parameter enables different options to be specified that affect the allocation, as follows.

  • CU_MEMHOSTREGISTER_PORTABLE: The memory registered by this call will be considered as pinned memory by all CUDA contexts, not just the one that performed the registration.

  • CU_MEMHOSTREGISTER_DEVICEMAP: Maps the registered memory into the CUDA address space. The device pointer to the memory may be obtained by calling cuMemHostGetDevicePointer().

All of these flags are orthogonal to one another: a developer may page-lock memory that is portable or mapped with no restrictions.

The CU_MEMHOSTREGISTER_DEVICEMAP flag may be specified on CUDA contexts for devices that do not support mapped pinned memory. The failure is deferred to cuMemHostGetDevicePointer() because the memory may be mapped into other CUDA contexts via the CU_MEMHOSTREGISTER_PORTABLE flag.

For devices that have a non-zero value for the device attribute CU_DEVICE_ATTRIBUTE_CAN_USE_HOST_POINTER_FOR_REGISTERED_MEM, the memory can also be accessed from the device using the host pointer p. The device pointer returned by cuMemHostGetDevicePointer() may or may not match the original host pointer p and depends on the devices visible to the application. If all devices visible to the application have a non-zero value for the device attribute, the device pointer returned by cuMemHostGetDevicePointer() will match the original pointer p. If any device visible to the application has a zero value for the device attribute, the device pointer returned by cuMemHostGetDevicePointer() will not match the original host pointer p, but it will be suitable for use on all devices provided Unified Virtual Addressing is enabled. In such systems, it is valid to access the memory using either pointer on devices that have a non-zero value for the device attribute. Note however that such devices should access the memory using only one of the two pointers and not both.

The memory page-locked by this function must be unregistered with cuMemHostUnregister().
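
A sketch of registering an existing NumPy buffer as pinned memory and unregistering it afterwards, assuming the bindings accept a raw integer address for the host pointer (as other host-pointer parameters in this module do):

    import numpy as np
    from cuda.bindings import driver

    host = np.zeros(1 << 20, dtype=np.uint8)

    # Page-lock the existing buffer (Flags = 0: no portable/devicemap options).
    err, = driver.cuMemHostRegister(host.ctypes.data, host.nbytes, 0)
    assert err == driver.CUresult.CUDA_SUCCESS

    # ... accelerated cuMemcpyHtoD/cuMemcpyDtoH transfers using host.ctypes.data ...

    err, = driver.cuMemHostUnregister(host.ctypes.data)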

Parameters:
  • p (Any) – Host pointer to memory to page-lock

  • bytesize (size_t) – Size in bytes of the address range to page-lock

  • Flags (unsigned int) – Flags for allocation request

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_OUT_OF_MEMORY, CUDA_ERROR_HOST_MEMORY_ALREADY_REGISTERED, CUDA_ERROR_NOT_PERMITTED, CUDA_ERROR_NOT_SUPPORTED

Return type:

CUresult

cuda.bindings.driver.cuMemHostUnregister(p)

Unregisters a memory range that was registered with cuMemHostRegister.

Unmaps the memory range whose base address is specified by p, and makes it pageable again.

The base address must be the same one specified to cuMemHostRegister().

Parameters:

p (Any) – Host pointer to memory to unregister

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_OUT_OF_MEMORY, CUDA_ERROR_HOST_MEMORY_NOT_REGISTERED

Return type:

CUresult

cuda.bindings.driver.cuMemcpy(dst, src, size_t ByteCount)

Copies memory.

Copies data between two pointers. dst and src are base pointers of the destination and source, respectively. ByteCount specifies the number of bytes to copy. Note that this function infers the type of the transfer (host to host, host to device, device to device, or device to host) from the pointer values. This function is only allowed in contexts which support unified addressing.

Parameters:
  • dst (CUdeviceptr) – Destination unified virtual address space pointer

  • src (CUdeviceptr) – Source unified virtual address space pointer

  • ByteCount (size_t) – Size of memory copy in bytes

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult

cuda.bindings.driver.cuMemcpyPeer(dstDevice, dstContext, srcDevice, srcContext, size_t ByteCount)

Copies device memory between two contexts.

Copies from device memory in one context to device memory in another context. dstDevice is the base device pointer of the destination memory and dstContext is the destination context. srcDevice is the base device pointer of the source memory and srcContext is the source context. ByteCount specifies the number of bytes to copy.

Parameters:
  • dstDevice (CUdeviceptr) – Destination device pointer

  • dstContext (CUcontext) – Destination context

  • srcDevice (CUdeviceptr) – Source device pointer

  • srcContext (CUcontext) – Source context

  • ByteCount (size_t) – Size of memory copy in bytes

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult

cuda.bindings.driver.cuMemcpyHtoD(dstDevice, srcHost, size_t ByteCount)

Copies memory from Host to Device.

Copies from host memory to device memory. dstDevice and srcHost are the base addresses of the destination and source, respectively. ByteCount specifies the number of bytes to copy.

Parameters:
  • dstDevice (CUdeviceptr) – Destination device pointer

  • srcHost (Any) – Source host pointer

  • ByteCount (size_t) – Size of memory copy in bytes

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult

cuda.bindings.driver.cuMemcpyDtoH(dstHost, srcDevice, size_t ByteCount)

Copies memory from Device to Host.

Copies from device to host memory. dstHost and srcDevice specify the base pointers of the destination and source, respectively. ByteCount specifies the number of bytes to copy.
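
A round-trip sketch combining cuMemcpyHtoD and cuMemcpyDtoH with NumPy host buffers, assuming an initialized context; host pointers are passed as raw addresses via ndarray.ctypes.data:

    import numpy as np
    from cuda.bindings import driver

    src = np.arange(1024, dtype=np.float32)
    dst = np.empty_like(src)
    nbytes = src.nbytes

    err, dptr = driver.cuMemAlloc(nbytes)
    err, = driver.cuMemcpyHtoD(dptr, src.ctypes.data, nbytes)   # host -> device
    err, = driver.cuMemcpyDtoH(dst.ctypes.data, dptr, nbytes)   # device -> host
    err, = driver.cuMemFree(dptr)

    assert np.array_equal(src, dst)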

Parameters:
  • dstHost (Any) – Destination host pointer

  • srcDevice (CUdeviceptr) – Source device pointer

  • ByteCount (size_t) – Size of memory copy in bytes

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult

cuda.bindings.driver.cuMemcpyDtoD(dstDevice, srcDevice, size_t ByteCount)

Copies memory from Device to Device.

Copies from device memory to device memory. dstDevice and srcDevice are the base pointers of the destination and source, respectively. ByteCount specifies the number of bytes to copy.

Parameters:
  • dstDevice (CUdeviceptr) – Destination device pointer

  • srcDevice (CUdeviceptr) – Source device pointer

  • ByteCount (size_t) – Size of memory copy in bytes

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult

cuda.bindings.driver.cuMemcpyDtoA(dstArray, size_t dstOffset, srcDevice, size_t ByteCount)

Copies memory from Device to Array.

Copies from device memory to a 1D CUDA array. dstArray and dstOffset specify the CUDA array handle and starting index of the destination data. srcDevice specifies the base pointer of the source. ByteCount specifies the number of bytes to copy.

Parameters:
  • dstArray (CUarray) – Destination array

  • dstOffset (size_t) – Offset in bytes of destination array

  • srcDevice (CUdeviceptr) – Source device pointer

  • ByteCount (size_t) – Size of memory copy in bytes

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult

cuda.bindings.driver.cuMemcpyAtoD(dstDevice, srcArray, size_t srcOffset, size_t ByteCount)

Copies memory from Array to Device.

Copies from one 1D CUDA array to device memory. dstDevice specifies the base pointer of the destination and must be naturally aligned with the CUDA array elements. srcArray and srcOffset specify the CUDA array handle and the offset in bytes into the array where the copy is to begin. ByteCount specifies the number of bytes to copy and must be evenly divisible by the array element size.

Parameters:
  • dstDevice (CUdeviceptr) – Destination device pointer

  • srcArray (CUarray) – Source array

  • srcOffset (size_t) – Offset in bytes of source array

  • ByteCount (size_t) – Size of memory copy in bytes

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult

cuda.bindings.driver.cuMemcpyHtoA(dstArray, size_t dstOffset, srcHost, size_t ByteCount)

Copies memory from Host to Array.

Copies from host memory to a 1D CUDA array. dstArray and dstOffset specify the CUDA array handle and starting offset in bytes of the destination data. srcHost specifies the base address of the source. ByteCount specifies the number of bytes to copy.

Parameters:
  • dstArray (CUarray) – Destination array

  • dstOffset (size_t) – Offset in bytes of destination array

  • srcHost (Any) – Source host pointer

  • ByteCount (size_t) – Size of memory copy in bytes

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult

cuda.bindings.driver.cuMemcpyAtoH(dstHost, srcArray, size_t srcOffset, size_t ByteCount)

Copies memory from Array to Host.

Copies from one 1D CUDA array to host memory. dstHost specifies the base pointer of the destination. srcArray and srcOffset specify the CUDA array handle and starting offset in bytes of the source data. ByteCount specifies the number of bytes to copy.

Parameters:
  • dstHost (Any) – Destination host pointer

  • srcArray (CUarray) – Source array

  • srcOffset (size_t) – Offset in bytes of source array

  • ByteCount (size_t) – Size of memory copy in bytes

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult

cuda.bindings.driver.cuMemcpyAtoA(dstArray, size_t dstOffset, srcArray, size_t srcOffset, size_t ByteCount)

Copies memory from Array to Array.

Copies from one 1D CUDA array to another. dstArray and srcArray specify the handles of the destination and source CUDA arrays for the copy, respectively. dstOffset and srcOffset specify the destination and source offsets in bytes into the CUDA arrays. ByteCount is the number of bytes to be copied. The elements of the CUDA arrays need not be the same format, but they must be the same size, and ByteCount must be evenly divisible by that size.

Parameters:
  • dstArray (CUarray) – Destination array

  • dstOffset (size_t) – Offset in bytes of destination array

  • srcArray (CUarray) – Source array

  • srcOffset (size_t) – Offset in bytes of source array

  • ByteCount (size_t) – Size of memory copy in bytes

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult

cuda.bindings.driver.cuMemcpy2D(CUDA_MEMCPY2D pCopy: Optional[CUDA_MEMCPY2D])

Copies memory for 2D arrays.

Perform a 2D memory copy according to the parameters specified in pCopy. The CUDA_MEMCPY2D structure is defined as:

View CUDA Toolkit Documentation for a C++ code example

where:

  • srcMemoryType and dstMemoryType specify the type of memory of the source and destination, respectively; CUmemorytype_enum is defined as:

View CUDA Toolkit Documentation for a C++ code example

If srcMemoryType is CU_MEMORYTYPE_UNIFIED, srcDevice and srcPitch specify the (unified virtual address space) base address of the source data and the bytes per row to apply. srcArray is ignored. This value may be used only if unified addressing is supported in the calling context.

If srcMemoryType is CU_MEMORYTYPE_HOST, srcHost and srcPitch specify the (host) base address of the source data and the bytes per row to apply. srcArray is ignored.

If srcMemoryType is CU_MEMORYTYPE_DEVICE, srcDevice and srcPitch specify the (device) base address of the source data and the bytes per row to apply. srcArray is ignored.

If srcMemoryType is CU_MEMORYTYPE_ARRAY, srcArray specifies the handle of the source data. srcHost, srcDevice and srcPitch are ignored.

If dstMemoryType is CU_MEMORYTYPE_HOST, dstHost and dstPitch specify the (host) base address of the destination data and the bytes per row to apply. dstArray is ignored.

If dstMemoryType is CU_MEMORYTYPE_UNIFIED, dstDevice and dstPitch specify the (unified virtual address space) base address of the destination data and the bytes per row to apply. dstArray is ignored. This value may be used only if unified addressing is supported in the calling context.

If dstMemoryType is CU_MEMORYTYPE_DEVICE, dstDevice and dstPitch specify the (device) base address of the destination data and the bytes per row to apply. dstArray is ignored.

If dstMemoryType is CU_MEMORYTYPE_ARRAY, dstArray specifies the handle of the destination data. dstHost, dstDevice and dstPitch are ignored.

  • srcXInBytes and srcY specify the base address of the source data for the copy.

For host pointers, the starting address is

View CUDA Toolkit Documentation for a C++ code example

For device pointers, the starting address is

View CUDA Toolkit Documentation for a C++ code example

For CUDA arrays, srcXInBytes must be evenly divisible by the array element size.

  • dstXInBytes and dstY specify the base address of the destination data for the copy.

For host pointers, the base address is

View CUDA Toolkit Documentation for a C++ code example

For device pointers, the starting address is

View CUDA Toolkit Documentation for a C++ code example

For CUDA arrays, dstXInBytes must be evenly divisible by the array element size.

cuMemcpy2D() returns an error if any pitch is greater than the maximum allowed (CU_DEVICE_ATTRIBUTE_MAX_PITCH). cuMemAllocPitch() passes back pitches that always work with cuMemcpy2D(). On intra-device memory copies (device to device, CUDA array to device, CUDA array to CUDA array), cuMemcpy2D() may fail for pitches not computed by cuMemAllocPitch(). cuMemcpy2DUnaligned() does not have this restriction, but may run significantly slower in the cases where cuMemcpy2D() would have returned an error code.
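
A sketch of filling in a CUDA_MEMCPY2D descriptor for a host-to-device pitched copy; the field names mirror the C struct and the enum values come from CUmemorytype. This is a hedged illustration under those assumptions, not a complete program:

    import numpy as np
    from cuda.bindings import driver

    height, width = 256, 512
    host = np.zeros((height, width), dtype=np.float32)
    width_bytes = width * host.itemsize

    err, dptr, pitch = driver.cuMemAllocPitch(width_bytes, height, 4)

    cpy = driver.CUDA_MEMCPY2D()
    cpy.srcMemoryType = driver.CUmemorytype.CU_MEMORYTYPE_HOST
    cpy.srcHost = host.ctypes.data
    cpy.srcPitch = width_bytes
    cpy.dstMemoryType = driver.CUmemorytype.CU_MEMORYTYPE_DEVICE
    cpy.dstDevice = dptr
    cpy.dstPitch = pitch
    cpy.WidthInBytes = width_bytes
    cpy.Height = height

    err, = driver.cuMemcpy2D(cpy)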

Parameters:

pCopy (CUDA_MEMCPY2D) – Parameters for the memory copy

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult

cuda.bindings.driver.cuMemcpy2DUnaligned(CUDA_MEMCPY2D pCopy: Optional[CUDA_MEMCPY2D])

Copies memory for 2D arrays.

Perform a 2D memory copy according to the parameters specified in pCopy. The CUDA_MEMCPY2D structure is defined as:

View CUDA Toolkit Documentation for a C++ code example

where:

  • srcMemoryType and dstMemoryType specify the type of memory of the source and destination, respectively; CUmemorytype_enum is defined as:

View CUDA Toolkit Documentation for a C++ code example

If srcMemoryType is CU_MEMORYTYPE_UNIFIED, srcDevice and srcPitch specify the (unified virtual address space) base address of the source data and the bytes per row to apply. srcArray is ignored. This value may be used only if unified addressing is supported in the calling context.

If srcMemoryType is CU_MEMORYTYPE_HOST, srcHost and srcPitch specify the (host) base address of the source data and the bytes per row to apply. srcArray is ignored.

If srcMemoryType is CU_MEMORYTYPE_DEVICE, srcDevice and srcPitch specify the (device) base address of the source data and the bytes per row to apply. srcArray is ignored.

If srcMemoryType is CU_MEMORYTYPE_ARRAY, srcArray specifies the handle of the source data. srcHost, srcDevice and srcPitch are ignored.

If dstMemoryType is CU_MEMORYTYPE_UNIFIED, dstDevice and dstPitch specify the (unified virtual address space) base address of the destination data and the bytes per row to apply. dstArray is ignored. This value may be used only if unified addressing is supported in the calling context.

If dstMemoryType is CU_MEMORYTYPE_HOST, dstHost and dstPitch specify the (host) base address of the destination data and the bytes per row to apply. dstArray is ignored.

If dstMemoryType is CU_MEMORYTYPE_DEVICE, dstDevice and dstPitch specify the (device) base address of the destination data and the bytes per row to apply. dstArray is ignored.

If dstMemoryType is CU_MEMORYTYPE_ARRAY, dstArray specifies the handle of the destination data. dstHost, dstDevice and dstPitch are ignored.

  • srcXInBytes and srcY specify the base address of the source data for the copy.

For host pointers, the starting address is

View CUDA Toolkit Documentation for a C++ code example

For device pointers, the starting address is

View CUDA Toolkit Documentation for a C++ code example

For CUDA arrays, srcXInBytes must be evenly divisible by the array element size.

  • dstXInBytes and dstY specify the base address of the destination data for the copy.

For host pointers, the base address is

View CUDA Toolkit Documentation for a C++ code example

For device pointers, the starting address is

View CUDA Toolkit Documentation for a C++ code example

For CUDA arrays, dstXInBytes must be evenly divisible by the array element size.

cuMemcpy2D() returns an error if any pitch is greater than the maximum allowed (CU_DEVICE_ATTRIBUTE_MAX_PITCH). cuMemAllocPitch() passes back pitches that always work with cuMemcpy2D(). On intra-device memory copies (device to device, CUDA array to device, CUDA array to CUDA array), cuMemcpy2D() may fail for pitches not computed by cuMemAllocPitch(). cuMemcpy2DUnaligned() does not have this restriction, but may run significantly slower in the cases where cuMemcpy2D() would have returned an error code.

Parameters:

pCopy (CUDA_MEMCPY2D) – Parameters for the memory copy

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult

cuda.bindings.driver.cuMemcpy3D(CUDA_MEMCPY3D pCopy: Optional[CUDA_MEMCPY3D])

Copies memory for 3D arrays.

Perform a 3D memory copy according to the parameters specified in pCopy. The CUDA_MEMCPY3D structure is defined as:

View CUDA Toolkit Documentation for a C++ code example

where:

  • srcMemoryType and dstMemoryType specify the type of memory of the source and destination, respectively; CUmemorytype_enum is defined as:

View CUDA Toolkit Documentation for a C++ code example

If srcMemoryType is CU_MEMORYTYPE_UNIFIED, srcDevice and srcPitch specify the (unified virtual address space) base address of the source data and the bytes per row to apply. srcArray is ignored. This value may be used only if unified addressing is supported in the calling context.

If srcMemoryType is CU_MEMORYTYPE_HOST, srcHost, srcPitch and srcHeight specify the (host) base address of the source data, the bytes per row, and the height of each 2D slice of the 3D array. srcArray is ignored.

If srcMemoryType is CU_MEMORYTYPE_DEVICE, srcDevice, srcPitch and srcHeight specify the (device) base address of the source data, the bytes per row, and the height of each 2D slice of the 3D array. srcArray is ignored.

If srcMemoryType is CU_MEMORYTYPE_ARRAY, srcArray specifies the handle of the source data. srcHost, srcDevice, srcPitch and srcHeight are ignored.

If dstMemoryType is CU_MEMORYTYPE_UNIFIED, dstDevice and dstPitch specify the (unified virtual address space) base address of the destination data and the bytes per row to apply. dstArray is ignored. This value may be used only if unified addressing is supported in the calling context.

If dstMemoryType is CU_MEMORYTYPE_HOST, dstHost, dstPitch and dstHeight specify the (host) base address of the destination data, the bytes per row, and the height of each 2D slice of the 3D array. dstArray is ignored.

If dstMemoryType is CU_MEMORYTYPE_DEVICE, dstDevice, dstPitch and dstHeight specify the (device) base address of the destination data, the bytes per row, and the height of each 2D slice of the 3D array. dstArray is ignored.

If dstMemoryType is CU_MEMORYTYPE_ARRAY, dstArray specifies the handle of the destination data. dstHost, dstDevice, dstPitch and dstHeight are ignored.

  • srcXInBytes, srcY and srcZ specify the base address of the source data for the copy.

For host pointers, the starting address is

View CUDA Toolkit Documentation for a C++ code example

For device pointers, the starting address is

View CUDA Toolkit Documentation for a C++ code example

For CUDA arrays, srcXInBytes must be evenly divisible by the array element size.

  • dstXInBytes, dstY and dstZ specify the base address of the destination data for the copy.

For host pointers, the base address is

View CUDA Toolkit Documentation for a C++ code example

For device pointers, the starting address is

View CUDA Toolkit Documentation for a C++ code example

For CUDA arrays, dstXInBytes must be evenly divisible by the array element size.

cuMemcpy3D() returns an error if any pitch is greater than the maximum allowed (CU_DEVICE_ATTRIBUTE_MAX_PITCH).

The srcLOD and dstLOD members of the CUDA_MEMCPY3D structure must be set to 0.

Parameters:

pCopy (CUDA_MEMCPY3D) – Parameters for the memory copy

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult

cuda.bindings.driver.cuMemcpy3DPeer(CUDA_MEMCPY3D_PEER pCopy: Optional[CUDA_MEMCPY3D_PEER])

Copies memory between contexts.

Perform a 3D memory copy according to the parameters specified in pCopy. See the definition of the CUDA_MEMCPY3D_PEER structure for documentation of its parameters.

Parameters:

pCopy (CUDA_MEMCPY3D_PEER) – Parameters for the memory copy

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult

cuda.bindings.driver.cuMemcpyAsync(dst, src, size_t ByteCount, hStream)

Copies memory asynchronously.

Copies data between two pointers. dst and src are base pointers of the destination and source, respectively. ByteCount specifies the number of bytes to copy. Note that this function infers the type of the transfer (host to host, host to device, device to device, or device to host) from the pointer values. This function is only allowed in contexts which support unified addressing.
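
A sketch of issuing asynchronous copies on a user-created stream, assuming unified addressing (as required by cuMemcpyAsync) and an initialized context; cuStreamCreate, cuStreamSynchronize, and cuStreamDestroy are standard driver calls used here for completeness:

    import numpy as np
    from cuda.bindings import driver

    src = np.ones(4096, dtype=np.uint8)
    dst = np.zeros_like(src)

    err, stream = driver.cuStreamCreate(0)
    err, dptr = driver.cuMemAlloc(src.nbytes)

    # With pageable host buffers the copies may still execute synchronously;
    # pinned memory (cuMemHostAlloc/cuMemHostRegister) is needed for true overlap.
    err, = driver.cuMemcpyAsync(dptr, src.ctypes.data, src.nbytes, stream)
    err, = driver.cuMemcpyAsync(dst.ctypes.data, dptr, src.nbytes, stream)
    err, = driver.cuStreamSynchronize(stream)

    err, = driver.cuMemFree(dptr)
    err, = driver.cuStreamDestroy(stream)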

Parameters:
  • dst (CUdeviceptr) – Destination unified virtual address space pointer

  • src (CUdeviceptr) – Source unified virtual address space pointer

  • ByteCount (size_t) – Size of memory copy in bytes

  • hStream (CUstream or cudaStream_t) – Stream identifier

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_INVALID_HANDLE

Return type:

CUresult

cuda.bindings.driver.cuMemcpyPeerAsync(dstDevice, dstContext, srcDevice, srcContext, size_t ByteCount, hStream)

Copies device memory between two contexts asynchronously.

Copies from device memory in one context to device memory in another context. dstDevice is the base device pointer of the destination memory and dstContext is the destination context. srcDevice is the base device pointer of the source memory and srcContext is the source context. ByteCount specifies the number of bytes to copy.

Parameters:
  • dstDevice (CUdeviceptr) – Destination device pointer

  • dstContext (CUcontext) – Destination context

  • srcDevice (CUdeviceptr) – Source device pointer

  • srcContext (CUcontext) – Source context

  • ByteCount (size_t) – Size of memory copy in bytes

  • hStream (CUstream or cudaStream_t) – Stream identifier

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_INVALID_HANDLE

Return type:

CUresult

cuda.bindings.driver.cuMemcpyHtoDAsync(dstDevice, srcHost, size_t ByteCount, hStream)

Copies memory from Host to Device.

Copies from host memory to device memory. dstDevice and srcHost are the base addresses of the destination and source, respectively. ByteCount specifies the number of bytes to copy.

Parameters:
  • dstDevice (CUdeviceptr) – Destination device pointer

  • srcHost (Any) – Source host pointer

  • ByteCount (size_t) – Size of memory copy in bytes

  • hStream (CUstream or cudaStream_t) – Stream identifier

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_INVALID_HANDLE

Return type:

CUresult

cuda.bindings.driver.cuMemcpyDtoHAsync(dstHost, srcDevice, size_t ByteCount, hStream)

Copies memory from Device to Host.

Copies from device to host memory. dstHost and srcDevice specify the base pointers of the destination and source, respectively. ByteCount specifies the number of bytes to copy.

Parameters:
  • dstHost (Any) – Destination host pointer

  • srcDevice (CUdeviceptr) – Source device pointer

  • ByteCount (size_t) – Size of memory copy in bytes

  • hStream (CUstream or cudaStream_t) – Stream identifier

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_INVALID_HANDLE

Return type:

CUresult

cuda.bindings.driver.cuMemcpyDtoDAsync(dstDevice, srcDevice, size_t ByteCount, hStream)

Copies memory from Device to Device.

Copies from device memory to device memory. dstDevice and srcDevice are the base pointers of the destination and source, respectively. ByteCount specifies the number of bytes to copy.

Parameters:
  • dstDevice (CUdeviceptr) – Destination device pointer

  • srcDevice (CUdeviceptr) – Source device pointer

  • ByteCount (size_t) – Size of memory copy in bytes

  • hStream (CUstream or cudaStream_t) – Stream identifier

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_INVALID_HANDLE

Return type:

CUresult

cuda.bindings.driver.cuMemcpyHtoAAsync(dstArray, size_t dstOffset, srcHost, size_t ByteCount, hStream)

Copies memory from Host to Array.

Copies from host memory to a 1D CUDA array. dstArray and dstOffset specify the CUDA array handle and starting offset in bytes of the destination data. srcHost specifies the base address of the source. ByteCount specifies the number of bytes to copy.

Parameters:
  • dstArray (CUarray) – Destination array

  • dstOffset (size_t) – Offset in bytes of destination array

  • srcHost (Any) – Source host pointer

  • ByteCount (size_t) – Size of memory copy in bytes

  • hStream (CUstream or cudaStream_t) – Stream identifier

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_INVALID_HANDLE

Return type:

CUresult

cuda.bindings.driver.cuMemcpyAtoHAsync(dstHost, srcArray, size_t srcOffset, size_t ByteCount, hStream)

Copies memory from Array to Host.

Copies from one 1D CUDA array to host memory. dstHost specifies the base pointer of the destination. srcArray and srcOffset specify the CUDA array handle and starting offset in bytes of the source data. ByteCount specifies the number of bytes to copy.

Parameters:
  • dstHost (Any) – Destination pointer

  • srcArray (CUarray) – Source array

  • srcOffset (size_t) – Offset in bytes of source array

  • ByteCount (size_t) – Size of memory copy in bytes

  • hStream (CUstream or cudaStream_t) – Stream identifier

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_INVALID_HANDLE

Return type:

CUresult

cuda.bindings.driver.cuMemcpy2DAsync(CUDA_MEMCPY2D pCopy: Optional[CUDA_MEMCPY2D], hStream)

Copies memory for 2D arrays.

Perform a 2D memory copy according to the parameters specified in pCopy. The CUDA_MEMCPY2D structure is defined as:

View CUDA Toolkit Documentation for a C++ code example

where:

  • srcMemoryType and dstMemoryType specify the type of memory of the source and destination, respectively; CUmemorytype_enum is defined as:

View CUDA Toolkit Documentation for a C++ code example

If srcMemoryType is CU_MEMORYTYPE_HOST, srcHost and srcPitch specify the (host) base address of the source data and the bytes per row to apply. srcArray is ignored.

If srcMemoryType is CU_MEMORYTYPE_UNIFIED, srcDevice and srcPitch specify the (unified virtual address space) base address of the source data and the bytes per row to apply. srcArray is ignored. This value may be used only if unified addressing is supported in the calling context.

If srcMemoryType is CU_MEMORYTYPE_DEVICE, srcDevice and srcPitch specify the (device) base address of the source data and the bytes per row to apply. srcArray is ignored.

If srcMemoryType is CU_MEMORYTYPE_ARRAY, srcArray specifies the handle of the source data. srcHost, srcDevice and srcPitch are ignored.

If dstMemoryType is CU_MEMORYTYPE_UNIFIED, dstDevice and dstPitch specify the (unified virtual address space) base address of the source data and the bytes per row to apply. dstArray is ignored. This value may be used only if unified addressing is supported in the calling context.

If dstMemoryType is CU_MEMORYTYPE_HOST, dstHost and dstPitch specify the (host) base address of the destination data and the bytes per row to apply. dstArray is ignored.

If dstMemoryType is CU_MEMORYTYPE_DEVICE, dstDevice and dstPitch specify the (device) base address of the destination data and the bytes per row to apply. dstArray is ignored.

If dstMemoryType is CU_MEMORYTYPE_ARRAY, dstArray specifies the handle of the destination data. dstHost, dstDevice and dstPitch are ignored.

  • srcXInBytes and srcY specify the base address of the source data for the copy.

For host pointers, the starting address is

View CUDA Toolkit Documentation for a C++ code example

For device pointers, the starting address is

View CUDA Toolkit Documentation for a C++ code example

For CUDA arrays, srcXInBytes must be evenly divisible by the array element size.

  • dstXInBytes and dstY specify the base address of the destination data for the copy.

For host pointers, the base address is

View CUDA Toolkit Documentation for a C++ code example

For device pointers, the starting address is

View CUDA Toolkit Documentation for a C++ code example

For CUDA arrays, dstXInBytes must be evenly divisible by the array element size.

cuMemcpy2DAsync() returns an error if any pitch is greater than the maximum allowed (CU_DEVICE_ATTRIBUTE_MAX_PITCH). cuMemAllocPitch() passes back pitches that always work with cuMemcpy2D(). On intra-device memory copies (device to device, CUDA array to device, CUDA array to CUDA array), cuMemcpy2DAsync() may fail for pitches not computed by cuMemAllocPitch().

Parameters:
Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_INVALID_HANDLE

Return type:

CUresult

cuda.bindings.driver.cuMemcpy3DAsync(CUDA_MEMCPY3D pCopy: Optional[CUDA_MEMCPY3D], hStream)

Copies memory for 3D arrays.

Perform a 3D memory copy according to the parameters specified in pCopy. The CUDA_MEMCPY3D structure is defined as:

View CUDA Toolkit Documentation for a C++ code example

where:

  • srcMemoryType and dstMemoryType specify the type of memory of the source and destination, respectively; CUmemorytype_enum is defined as:

View CUDA Toolkit Documentation for a C++ code example

If srcMemoryType is CU_MEMORYTYPE_UNIFIED, srcDevice and srcPitch specify the (unified virtual address space) base address of the source data and the bytes per row to apply. srcArray is ignored. This value may be used only if unified addressing is supported in the calling context.

If srcMemoryType is CU_MEMORYTYPE_HOST, srcHost, srcPitch and srcHeight specify the (host) base address of the source data, the bytes per row, and the height of each 2D slice of the 3D array. srcArray is ignored.

If srcMemoryType is CU_MEMORYTYPE_DEVICE, srcDevice, srcPitch and srcHeight specify the (device) base address of the source data, the bytes per row, and the height of each 2D slice of the 3D array. srcArray is ignored.

If srcMemoryType is CU_MEMORYTYPE_ARRAY, srcArray specifies the handle of the source data. srcHost, srcDevice, srcPitch and srcHeight are ignored.

If dstMemoryType is CU_MEMORYTYPE_UNIFIED, dstDevice and dstPitch specify the (unified virtual address space) base address of the destination data and the bytes per row to apply. dstArray is ignored. This value may be used only if unified addressing is supported in the calling context.

If dstMemoryType is CU_MEMORYTYPE_HOST, dstHost, dstPitch and dstHeight specify the (host) base address of the destination data, the bytes per row, and the height of each 2D slice of the 3D array. dstArray is ignored.

If dstMemoryType is CU_MEMORYTYPE_DEVICE, dstDevice, dstPitch and dstHeight specify the (device) base address of the destination data, the bytes per row, and the height of each 2D slice of the 3D array. dstArray is ignored.

If dstMemoryType is CU_MEMORYTYPE_ARRAY, dstArray specifies the handle of the destination data. dstHost, dstDevice, dstPitch and dstHeight are ignored.

  • srcXInBytes, srcY and srcZ specify the base address of the source data for the copy.

For host pointers, the starting address is

View CUDA Toolkit Documentation for a C++ code example

For device pointers, the starting address is

View CUDA Toolkit Documentation for a C++ code example

For CUDA arrays, srcXInBytes must be evenly divisible by the array element size.

  • dstXInBytes, dstY and dstZ specify the base address of the destination data for the copy.

For host pointers, the base address is

View CUDA Toolkit Documentation for a C++ code example

For device pointers, the starting address is

View CUDA Toolkit Documentation for a C++ code example

For CUDA arrays, dstXInBytes must be evenly divisible by the array element size.

cuMemcpy3DAsync() returns an error if any pitch is greater than the maximum allowed (CU_DEVICE_ATTRIBUTE_MAX_PITCH).

The srcLOD and dstLOD members of the CUDA_MEMCPY3D structure must be set to 0.

Parameters:
Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_INVALID_HANDLE

Return type:

CUresult
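
As a rough illustration of how the CUDA_MEMCPY3D fields fit together through the Python bindings, the sketch below copies a small host volume into a linear device allocation. This is a minimal sketch only: error checking of the returned CUresult values is mostly omitted, the extents are arbitrary, and the host base address is assumed to be passable as a plain integer address (as in other cuda-python memcpy examples).

    import numpy as np
    from cuda.bindings import driver

    driver.cuInit(0)
    _, dev = driver.cuDeviceGet(0)
    _, ctx = driver.cuDevicePrimaryCtxRetain(dev)
    driver.cuCtxSetCurrent(ctx)
    _, stream = driver.cuStreamCreate(0)

    width, height, depth = 64, 32, 8                   # extents in float32 elements
    row_bytes = width * 4

    h_src = np.arange(width * height * depth, dtype=np.float32).reshape(depth, height, width)
    _, d_dst = driver.cuMemAlloc(h_src.nbytes)         # linear device destination

    cpy = driver.CUDA_MEMCPY3D()
    cpy.srcMemoryType = driver.CUmemorytype.CU_MEMORYTYPE_HOST
    cpy.srcHost = h_src.ctypes.data                    # host base address
    cpy.srcPitch = row_bytes                           # bytes per row
    cpy.srcHeight = height                             # rows per 2D slice
    cpy.dstMemoryType = driver.CUmemorytype.CU_MEMORYTYPE_DEVICE
    cpy.dstDevice = d_dst
    cpy.dstPitch = row_bytes
    cpy.dstHeight = height
    cpy.WidthInBytes = row_bytes                       # extent of each copied row, in bytes
    cpy.Height = height
    cpy.Depth = depth
    # srcLOD and dstLOD default to 0, as required above.

    err, = driver.cuMemcpy3DAsync(cpy, stream)
    driver.cuStreamSynchronize(stream)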

cuda.bindings.driver.cuMemcpy3DPeerAsync(CUDA_MEMCPY3D_PEER pCopy: Optional[CUDA_MEMCPY3D_PEER], hStream)

Copies memory between contexts asynchronously.

Perform a 3D memory copy according to the parameters specified in pCopy. See the definition of the CUDA_MEMCPY3D_PEER structure for documentation of its parameters.

Parameters:
Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult

cuda.bindings.driver.cuMemsetD8(dstDevice, unsigned char uc, size_t N)

Initializes device memory.

Sets the memory range of N 8-bit values to the specified value uc.

Parameters:
  • dstDevice (CUdeviceptr) – Destination device pointer

  • uc (unsigned char) – Value to set

  • N (size_t) – Number of elements

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult
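
A minimal sketch of the call through the Python bindings, with the usual cuInit / primary-context boilerplate included and most error checking omitted; the 1 MiB size and fill value are arbitrary:

    from cuda.bindings import driver

    driver.cuInit(0)
    _, dev = driver.cuDeviceGet(0)
    _, ctx = driver.cuDevicePrimaryCtxRetain(dev)
    driver.cuCtxSetCurrent(ctx)

    n = 1 << 20                                   # one MiB of bytes
    _, dptr = driver.cuMemAlloc(n)
    err, = driver.cuMemsetD8(dptr, 0xAB, n)       # fill every byte with 0xAB
    assert err == driver.CUresult.CUDA_SUCCESS
    driver.cuMemFree(dptr)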

cuda.bindings.driver.cuMemsetD16(dstDevice, unsigned short us, size_t N)

Initializes device memory.

Sets the memory range of N 16-bit values to the specified value us. The dstDevice pointer must be two byte aligned.

Parameters:
  • dstDevice (CUdeviceptr) – Destination device pointer

  • us (unsigned short) – Value to set

  • N (size_t) – Number of elements

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult

cuda.bindings.driver.cuMemsetD32(dstDevice, unsigned int ui, size_t N)

Initializes device memory.

Sets the memory range of N 32-bit values to the specified value ui. The dstDevice pointer must be four byte aligned.

Parameters:
  • dstDevice (CUdeviceptr) – Destination device pointer

  • ui (unsigned int) – Value to set

  • N (size_t) – Number of elements

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult

cuda.bindings.driver.cuMemsetD2D8(dstDevice, size_t dstPitch, unsigned char uc, size_t Width, size_t Height)

Initializes device memory.

Sets the 2D memory range of Width 8-bit values to the specified value uc. Height specifies the number of rows to set, and dstPitch specifies the number of bytes between each row. This function performs fastest when the pitch is one that has been passed back by cuMemAllocPitch().

Parameters:
  • dstDevice (CUdeviceptr) – Destination device pointer

  • dstPitch (size_t) – Pitch of destination device pointer (Unused if Height is 1)

  • uc (unsigned char) – Value to set

  • Width (size_t) – Width of row

  • Height (size_t) – Number of rows

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult
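
For the 2D variants the pitch usually comes straight from cuMemAllocPitch, as in this minimal sketch (error checking omitted; the row width and row count are arbitrary):

    from cuda.bindings import driver

    driver.cuInit(0)
    _, dev = driver.cuDeviceGet(0)
    _, ctx = driver.cuDevicePrimaryCtxRetain(dev)
    driver.cuCtxSetCurrent(ctx)

    width_bytes, height = 1000, 64                # 1000-byte rows, 64 rows
    _, dptr, pitch = driver.cuMemAllocPitch(width_bytes, height, 4)
    err, = driver.cuMemsetD2D8(dptr, pitch, 0x00, width_bytes, height)
    driver.cuMemFree(dptr)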

cuda.bindings.driver.cuMemsetD2D16(dstDevice, size_t dstPitch, unsigned short us, size_t Width, size_t Height)

Initializes device memory.

Sets the 2D memory range of Width 16-bit values to the specified value us. Height specifies the number of rows to set, and dstPitch specifies the number of bytes between each row. The dstDevice pointer and dstPitch offset must be two byte aligned. This function performs fastest when the pitch is one that has been passed back by cuMemAllocPitch().

Parameters:
  • dstDevice (CUdeviceptr) – Destination device pointer

  • dstPitch (size_t) – Pitch of destination device pointer (Unused if Height is 1)

  • us (unsigned short) – Value to set

  • Width (size_t) – Width of row

  • Height (size_t) – Number of rows

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult

cuda.bindings.driver.cuMemsetD2D32(dstDevice, size_t dstPitch, unsigned int ui, size_t Width, size_t Height)

Initializes device memory.

Sets the 2D memory range of Width 32-bit values to the specified value ui. Height specifies the number of rows to set, and dstPitch specifies the number of bytes between each row. The dstDevice pointer and dstPitch offset must be four byte aligned. This function performs fastest when the pitch is one that has been passed back by cuMemAllocPitch().

Parameters:
  • dstDevice (CUdeviceptr) – Destination device pointer

  • dstPitch (size_t) – Pitch of destination device pointer (Unused if Height is 1)

  • ui (unsigned int) – Value to set

  • Width (size_t) – Width of row

  • Height (size_t) – Number of rows

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult

cuda.bindings.driver.cuMemsetD8Async(dstDevice, unsigned char uc, size_t N, hStream)

Sets device memory.

Sets the memory range of N 8-bit values to the specified value uc.

Parameters:
  • dstDevice (CUdeviceptr) – Destination device pointer

  • uc (unsigned char) – Value to set

  • N (size_t) – Number of elements

  • hStream (CUstream or cudaStream_t) – Stream identifier

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult

cuda.bindings.driver.cuMemsetD16Async(dstDevice, unsigned short us, size_t N, hStream)

Sets device memory.

Sets the memory range of N 16-bit values to the specified value us. The dstDevice pointer must be two byte aligned.

Parameters:
  • dstDevice (CUdeviceptr) – Destination device pointer

  • us (unsigned short) – Value to set

  • N (size_t) – Number of elements

  • hStream (CUstream or cudaStream_t) – Stream identifier

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult

cuda.bindings.driver.cuMemsetD32Async(dstDevice, unsigned int ui, size_t N, hStream)

Sets device memory.

Sets the memory range of N 32-bit values to the specified value ui. The dstDevice pointer must be four byte aligned.

Parameters:
  • dstDevice (CUdeviceptr) – Destination device pointer

  • ui (unsigned int) – Value to set

  • N (size_t) – Number of elements

  • hStream (CUstream or cudaStream_t) – Stream identifier

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult
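
The asynchronous variants take the stream as a final argument and are only stream-ordered, so the host must synchronize before relying on the result. A minimal sketch with error checking omitted and arbitrary sizes:

    from cuda.bindings import driver

    driver.cuInit(0)
    _, dev = driver.cuDeviceGet(0)
    _, ctx = driver.cuDevicePrimaryCtxRetain(dev)
    driver.cuCtxSetCurrent(ctx)
    _, stream = driver.cuStreamCreate(0)

    n = 4096                                      # number of 32-bit elements
    _, dptr = driver.cuMemAlloc(n * 4)            # cuMemAlloc returns suitably aligned memory
    driver.cuMemsetD32Async(dptr, 0xDEADBEEF, n, stream)
    driver.cuStreamSynchronize(stream)            # the memset is only complete after this point
    driver.cuMemFree(dptr)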

cuda.bindings.driver.cuMemsetD2D8Async(dstDevice, size_t dstPitch, unsigned char uc, size_t Width, size_t Height, hStream)

Sets device memory.

Sets the 2D memory range of Width 8-bit values to the specified value uc. Height specifies the number of rows to set, and dstPitch specifies the number of bytes between each row. This function performs fastest when the pitch is one that has been passed back by cuMemAllocPitch().

Parameters:
  • dstDevice (CUdeviceptr) – Destination device pointer

  • dstPitch (size_t) – Pitch of destination device pointer (Unused if Height is 1)

  • uc (unsigned char) – Value to set

  • Width (size_t) – Width of row

  • Height (size_t) – Number of rows

  • hStream (CUstream or cudaStream_t) – Stream identifier

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult

cuda.bindings.driver.cuMemsetD2D16Async(dstDevice, size_t dstPitch, unsigned short us, size_t Width, size_t Height, hStream)

Sets device memory.

Sets the 2D memory range of Width 16-bit values to the specified value us. Height specifies the number of rows to set, and dstPitch specifies the number of bytes between each row. The dstDevice pointer and dstPitch offset must be two byte aligned. This function performs fastest when the pitch is one that has been passed back by cuMemAllocPitch().

Parameters:
  • dstDevice (CUdeviceptr) – Destination device pointer

  • dstPitch (size_t) – Pitch of destination device pointer (Unused if Height is 1)

  • us (unsigned short) – Value to set

  • Width (size_t) – Width of row

  • Height (size_t) – Number of rows

  • hStream (CUstream or cudaStream_t) – Stream identifier

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult

cuda.bindings.driver.cuMemsetD2D32Async(dstDevice, size_t dstPitch, unsigned int ui, size_t Width, size_t Height, hStream)

Sets device memory.

Sets the 2D memory range of Width 32-bit values to the specified value ui. Height specifies the number of rows to set, and dstPitch specifies the number of bytes between each row. The dstDevice pointer and dstPitch offset must be four byte aligned. This function performs fastest when the pitch is one that has been passed back by cuMemAllocPitch().

Parameters:
  • dstDevice (CUdeviceptr) – Destination device pointer

  • dstPitch (size_t) – Pitch of destination device pointer (Unused if Height is 1)

  • ui (unsigned int) – Value to set

  • Width (size_t) – Width of row

  • Height (size_t) – Number of rows

  • hStream (CUstream or cudaStream_t) – Stream identifier

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult

cuda.bindings.driver.cuArrayCreate(CUDA_ARRAY_DESCRIPTOR pAllocateArray: Optional[CUDA_ARRAY_DESCRIPTOR])

Creates a 1D or 2D CUDA array.

Creates a CUDA array according to the CUDA_ARRAY_DESCRIPTOR structure pAllocateArray and returns a handle to the new CUDA array in *pHandle. The CUDA_ARRAY_DESCRIPTOR is defined as:

View CUDA Toolkit Documentation for a C++ code example

where:

  • Width and Height are the width and height of the CUDA array (in elements); the CUDA array is one-dimensional if Height is 0, two-dimensional otherwise;

  • Format specifies the format of the elements; CUarray_format is defined as:

  • View CUDA Toolkit Documentation for a C++ code example

  • NumChannels specifies the number of packed components per CUDA array element; it may be 1, 2, or 4;

Here are examples of CUDA array descriptions:

Description for a CUDA array of 2048 floats:

View CUDA Toolkit Documentation for a C++ code example

Description for a 64 x 64 CUDA array of floats:

View CUDA Toolkit Documentation for a C++ code example

Description for a width x height CUDA array of 64-bit, 4x16-bit float16’s:

View CUDA Toolkit Documentation for a C++ code example

Description for a width x height CUDA array of 16-bit elements, each of which is two 8-bit unsigned chars:

View CUDA Toolkit Documentation for a C++ code example

Parameters:

pAllocateArray (CUDA_ARRAY_DESCRIPTOR) – Array descriptor

Returns:
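
As a sketch of the second description above (a 64 x 64 CUDA array of floats) expressed through the Python bindings, with the usual initialization boilerplate and error checking omitted or abbreviated:

    from cuda.bindings import driver

    driver.cuInit(0)
    _, dev = driver.cuDeviceGet(0)
    _, ctx = driver.cuDevicePrimaryCtxRetain(dev)
    driver.cuCtxSetCurrent(ctx)

    desc = driver.CUDA_ARRAY_DESCRIPTOR()
    desc.Width = 64
    desc.Height = 64                              # 0 here would make the array one-dimensional
    desc.Format = driver.CUarray_format.CU_AD_FORMAT_FLOAT
    desc.NumChannels = 1

    err, h_array = driver.cuArrayCreate(desc)
    # ... use the array, e.g. as a copy source or destination ...
    driver.cuArrayDestroy(h_array)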

cuda.bindings.driver.cuArrayGetDescriptor(hArray)

Get a 1D or 2D CUDA array descriptor.

Returns in *pArrayDescriptor a descriptor containing information on the format and dimensions of the CUDA array hArray. It is useful for subroutines that have been passed a CUDA array, but need to know the CUDA array parameters for validation or other purposes.

Parameters:

hArray (CUarray) – Array to get descriptor of

Returns:

cuda.bindings.driver.cuArrayGetSparseProperties(array)

Returns the layout properties of a sparse CUDA array.

Returns the layout properties of a sparse CUDA array in sparseProperties. If the CUDA array is not allocated with the flag CUDA_ARRAY3D_SPARSE, CUDA_ERROR_INVALID_VALUE will be returned.

If the returned value in flags contains CU_ARRAY_SPARSE_PROPERTIES_SINGLE_MIPTAIL, then miptailSize represents the total size of the array. Otherwise, it will be zero. Also, the returned value in miptailFirstLevel is always zero. Note that the array must have been allocated using cuArrayCreate or cuArray3DCreate. For CUDA arrays obtained using cuMipmappedArrayGetLevel, CUDA_ERROR_INVALID_VALUE will be returned. Instead, cuMipmappedArrayGetSparseProperties must be used to obtain the sparse properties of the entire CUDA mipmapped array to which array belongs.

Parameters:

array (CUarray) – CUDA array to get the sparse properties of

Returns:

cuda.bindings.driver.cuMipmappedArrayGetSparseProperties(mipmap)

Returns the layout properties of a sparse CUDA mipmapped array.

Returns the sparse array layout properties in sparseProperties. If the CUDA mipmapped array is not allocated with the flag CUDA_ARRAY3D_SPARSE, CUDA_ERROR_INVALID_VALUE will be returned.

For non-layered CUDA mipmapped arrays, miptailSize returns the size of the mip tail region. The mip tail region includes all mip levels whose width, height or depth is less than that of the tile. For layered CUDA mipmapped arrays, if flags contains CU_ARRAY_SPARSE_PROPERTIES_SINGLE_MIPTAIL, then miptailSize specifies the size of the mip tail of all layers combined. Otherwise, miptailSize specifies the mip tail size per layer. The returned value of miptailFirstLevel is valid only if miptailSize is non-zero.

Parameters:

mipmap (CUmipmappedArray) – CUDA mipmapped array to get the sparse properties of

Returns:

cuda.bindings.driver.cuArrayGetMemoryRequirements(array, device)

Returns the memory requirements of a CUDA array.

Returns the memory requirements of a CUDA array in memoryRequirements. If the CUDA array is not allocated with the flag CUDA_ARRAY3D_DEFERRED_MAPPING, CUDA_ERROR_INVALID_VALUE will be returned.

The returned value in size represents the total size of the CUDA array. The returned value in alignment represents the alignment necessary for mapping the CUDA array.

Parameters:
  • array (CUarray) – CUDA array to get the memory requirements of

  • device (CUdevice) – Device to get the memory requirements for

Returns:

cuda.bindings.driver.cuMipmappedArrayGetMemoryRequirements(mipmap, device)

Returns the memory requirements of a CUDA mipmapped array.

Returns the memory requirements of a CUDA mipmapped array in memoryRequirements. If the CUDA mipmapped array is not allocated with the flag CUDA_ARRAY3D_DEFERRED_MAPPING, CUDA_ERROR_INVALID_VALUE will be returned.

The returned value in size represents the total size of the CUDA mipmapped array. The returned value in alignment represents the alignment necessary for mapping the CUDA mipmapped array.

Parameters:
  • mipmap (CUmipmappedArray) – CUDA mipmapped array to get the memory requirements of

  • device (CUdevice) – Device to get the memory requirements for

Returns:

cuda.bindings.driver.cuArrayGetPlane(hArray, unsigned int planeIdx)

Gets a CUDA array plane from a CUDA array.

Returns in pPlaneArray a CUDA array that represents a single format plane of the CUDA array hArray.

If planeIdx is greater than the maximum number of planes in this array or if the array does not have a multi-planar format (e.g. CU_AD_FORMAT_NV12), then CUDA_ERROR_INVALID_VALUE is returned.

Note that if the hArray has format CU_AD_FORMAT_NV12, then passing in 0 for planeIdx returns a CUDA array of the same size as hArray but with one channel and CU_AD_FORMAT_UNSIGNED_INT8 as its format. If 1 is passed for planeIdx, then the returned CUDA array has half the height and width of hArray with two channels and CU_AD_FORMAT_UNSIGNED_INT8 as its format.

Parameters:
  • hArray (CUarray) – Multiplanar CUDA array

  • planeIdx (unsigned int) – Plane index

Returns:

cuda.bindings.driver.cuArrayDestroy(hArray)

Destroys a CUDA array.

Destroys the CUDA array hArray.

Parameters:

hArray (CUarray) – Array to destroy

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_HANDLE, CUDA_ERROR_ARRAY_IS_MAPPED, CUDA_ERROR_CONTEXT_IS_DESTROYED

Return type:

CUresult

cuda.bindings.driver.cuArray3DCreate(CUDA_ARRAY3D_DESCRIPTOR pAllocateArray: Optional[CUDA_ARRAY3D_DESCRIPTOR])

Creates a 3D CUDA array.

Creates a CUDA array according to the CUDA_ARRAY3D_DESCRIPTOR structure pAllocateArray and returns a handle to the new CUDA array in *pHandle. The CUDA_ARRAY3D_DESCRIPTOR is defined as:

View CUDA Toolkit Documentation for a C++ code example

where:

  • Width, Height, and Depth are the width, height, and depth of the CUDA array (in elements); the following types of CUDA arrays can be allocated:

    • A 1D array is allocated if Height and Depth extents are both zero.

    • A 2D array is allocated if only Depth extent is zero.

    • A 3D array is allocated if all three extents are non-zero.

    • A 1D layered CUDA array is allocated if only Height is zero and the CUDA_ARRAY3D_LAYERED flag is set. Each layer is a 1D array. The number of layers is determined by the depth extent.

    • A 2D layered CUDA array is allocated if all three extents are non-zero and the CUDA_ARRAY3D_LAYERED flag is set. Each layer is a 2D array. The number of layers is determined by the depth extent.

    • A cubemap CUDA array is allocated if all three extents are non-zero and the CUDA_ARRAY3D_CUBEMAP flag is set. Width must be equal to Height, and Depth must be six. A cubemap is a special type of 2D layered CUDA array, where the six layers represent the six faces of a cube. The order of the six layers in memory is the same as that listed in CUarray_cubemap_face.

    • A cubemap layered CUDA array is allocated if all three extents are non-zero, and both, CUDA_ARRAY3D_CUBEMAP and CUDA_ARRAY3D_LAYERED flags are set. Width must be equal to Height, and Depth must be a multiple of six. A cubemap layered CUDA array is a special type of 2D layered CUDA array that consists of a collection of cubemaps. The first six layers represent the first cubemap, the next six layers form the second cubemap, and so on.

  • Format specifies the format of the elements; CUarray_format is defined as:

  • View CUDA Toolkit Documentation for a C++ code example

  • NumChannels specifies the number of packed components per CUDA array element; it may be 1, 2, or 4;

  • Flags may be set to

    • CUDA_ARRAY3D_LAYERED to enable creation of layered CUDA arrays. If this flag is set, Depth specifies the number of layers, not the depth of a 3D array.

    • CUDA_ARRAY3D_SURFACE_LDST to enable surface references to be bound to the CUDA array. If this flag is not set, cuSurfRefSetArray will fail when attempting to bind the CUDA array to a surface reference.

    • CUDA_ARRAY3D_CUBEMAP to enable creation of cubemaps. If this flag is set, Width must be equal to Height, and Depth must be six. If the CUDA_ARRAY3D_LAYERED flag is also set, then Depth must be a multiple of six.

    • CUDA_ARRAY3D_TEXTURE_GATHER to indicate that the CUDA array will be used for texture gather. Texture gather can only be performed on 2D CUDA arrays.

Width, Height and Depth must meet certain size requirements as listed in the following table. All values are specified in elements. Note that for brevity’s sake, the full name of the device attribute is not specified. For example, TEXTURE1D_WIDTH refers to the device attribute CU_DEVICE_ATTRIBUTE_MAXIMUM_TEXTURE1D_WIDTH.

Note that 2D CUDA arrays have different size requirements if the CUDA_ARRAY3D_TEXTURE_GATHER flag is set. Width and Height must not be greater than CU_DEVICE_ATTRIBUTE_MAXIMUM_TEXTURE2D_GATHER_WIDTH and CU_DEVICE_ATTRIBUTE_MAXIMUM_TEXTURE2D_GATHER_HEIGHT respectively, in that case.

View CUDA Toolkit Documentation for a table example

Here are examples of CUDA array descriptions:

Description for a CUDA array of 2048 floats:

View CUDA Toolkit Documentation for a C++ code example

Description for a 64 x 64 CUDA array of floats:

View CUDA Toolkit Documentation for a C++ code example

Description for a width x height x depth CUDA array of 64-bit, 4x16-bit float16’s:

View CUDA Toolkit Documentation for a C++ code example

Parameters:

pAllocateArray (CUDA_ARRAY3D_DESCRIPTOR) – 3D array descriptor

Returns:
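
A minimal sketch of a plain 3D allocation through the Python bindings (error checking omitted; the extents are arbitrary). Setting Flags to CUDA_ARRAY3D_LAYERED instead would reinterpret Depth as a layer count, per the list above:

    from cuda.bindings import driver

    driver.cuInit(0)
    _, dev = driver.cuDeviceGet(0)
    _, ctx = driver.cuDevicePrimaryCtxRetain(dev)
    driver.cuCtxSetCurrent(ctx)

    desc = driver.CUDA_ARRAY3D_DESCRIPTOR()
    desc.Width = 128
    desc.Height = 128
    desc.Depth = 16                               # all three extents non-zero: a 3D array
    desc.Format = driver.CUarray_format.CU_AD_FORMAT_FLOAT
    desc.NumChannels = 1
    desc.Flags = 0

    err, h_array = driver.cuArray3DCreate(desc)
    driver.cuArrayDestroy(h_array)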

cuda.bindings.driver.cuArray3DGetDescriptor(hArray)

Get a 3D CUDA array descriptor.

Returns in *pArrayDescriptor a descriptor containing information on the format and dimensions of the CUDA array hArray. It is useful for subroutines that have been passed a CUDA array, but need to know the CUDA array parameters for validation or other purposes.

This function may be called on 1D and 2D arrays, in which case the Height and/or Depth members of the descriptor struct will be set to 0.

Parameters:

hArray (CUarray) – 3D array to get descriptor of

Returns:

cuda.bindings.driver.cuMipmappedArrayCreate(CUDA_ARRAY3D_DESCRIPTOR pMipmappedArrayDesc: Optional[CUDA_ARRAY3D_DESCRIPTOR], unsigned int numMipmapLevels)

Creates a CUDA mipmapped array.

Creates a CUDA mipmapped array according to the CUDA_ARRAY3D_DESCRIPTOR structure pMipmappedArrayDesc and returns a handle to the new CUDA mipmapped array in *pHandle. numMipmapLevels specifies the number of mipmap levels to be allocated. This value is clamped to the range [1, 1 + floor(log2(max(width, height, depth)))].

The CUDA_ARRAY3D_DESCRIPTOR is defined as:

View CUDA Toolkit Documentation for a C++ code example

where:

  • Width, Height, and Depth are the width, height, and depth of the CUDA array (in elements); the following types of CUDA arrays can be allocated:

    • A 1D mipmapped array is allocated if Height and Depth extents are both zero.

    • A 2D mipmapped array is allocated if only Depth extent is zero.

    • A 3D mipmapped array is allocated if all three extents are non-zero.

    • A 1D layered CUDA mipmapped array is allocated if only Height is zero and the CUDA_ARRAY3D_LAYERED flag is set. Each layer is a 1D array. The number of layers is determined by the depth extent.

    • A 2D layered CUDA mipmapped array is allocated if all three extents are non-zero and the CUDA_ARRAY3D_LAYERED flag is set. Each layer is a 2D array. The number of layers is determined by the depth extent.

    • A cubemap CUDA mipmapped array is allocated if all three extents are non-zero and the CUDA_ARRAY3D_CUBEMAP flag is set. Width must be equal to Height, and Depth must be six. A cubemap is a special type of 2D layered CUDA array, where the six layers represent the six faces of a cube. The order of the six layers in memory is the same as that listed in CUarray_cubemap_face.

    • A cubemap layered CUDA mipmapped array is allocated if all three extents are non-zero, and both, CUDA_ARRAY3D_CUBEMAP and CUDA_ARRAY3D_LAYERED flags are set. Width must be equal to Height, and Depth must be a multiple of six. A cubemap layered CUDA array is a special type of 2D layered CUDA array that consists of a collection of cubemaps. The first six layers represent the first cubemap, the next six layers form the second cubemap, and so on.

  • Format specifies the format of the elements; CUarray_format is defined as:

  • View CUDA Toolkit Documentation for a C++ code example

  • NumChannels specifies the number of packed components per CUDA array element; it may be 1, 2, or 4;

  • Flags may be set to

    • CUDA_ARRAY3D_LAYERED to enable creation of layered CUDA mipmapped arrays. If this flag is set, Depth specifies the number of layers, not the depth of a 3D array.

    • CUDA_ARRAY3D_SURFACE_LDST to enable surface references to be bound to individual mipmap levels of the CUDA mipmapped array. If this flag is not set, cuSurfRefSetArray will fail when attempting to bind a mipmap level of the CUDA mipmapped array to a surface reference.

    • CUDA_ARRAY3D_CUBEMAP to enable creation of mipmapped cubemaps. If this flag is set, Width must be equal to Height, and Depth must be six. If the CUDA_ARRAY3D_LAYERED flag is also set, then Depth must be a multiple of six.

    • CUDA_ARRAY3D_TEXTURE_GATHER to indicate that the CUDA mipmapped array will be used for texture gather. Texture gather can only be performed on 2D CUDA mipmapped arrays.

Width, Height and Depth must meet certain size requirements as listed in the following table. All values are specified in elements. Note that for brevity’s sake, the full name of the device attribute is not specified. For example, TEXTURE1D_MIPMAPPED_WIDTH refers to the device attribute CU_DEVICE_ATTRIBUTE_MAXIMUM_TEXTURE1D_MIPMAPPED_WIDTH.

View CUDA Toolkit Documentation for a table example

Parameters:
  • pMipmappedArrayDesc (CUDA_ARRAY3D_DESCRIPTOR) – mipmapped array descriptor

  • numMipmapLevels (unsigned int) – Number of mipmap levels

Returns:
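
A minimal sketch creating a full mip chain for a 256 x 256 four-channel 8-bit array and fetching level 0, with error checking omitted; the dimensions and format are arbitrary:

    import math
    from cuda.bindings import driver

    driver.cuInit(0)
    _, dev = driver.cuDeviceGet(0)
    _, ctx = driver.cuDevicePrimaryCtxRetain(dev)
    driver.cuCtxSetCurrent(ctx)

    width = height = 256
    desc = driver.CUDA_ARRAY3D_DESCRIPTOR()
    desc.Width = width
    desc.Height = height
    desc.Depth = 0                                # 2D mipmapped array
    desc.Format = driver.CUarray_format.CU_AD_FORMAT_UNSIGNED_INT8
    desc.NumChannels = 4
    desc.Flags = 0

    levels = 1 + int(math.floor(math.log2(max(width, height, 1))))    # full chain; clamped anyway
    err, h_mipmap = driver.cuMipmappedArrayCreate(desc, levels)

    err, level0 = driver.cuMipmappedArrayGetLevel(h_mipmap, 0)        # see cuMipmappedArrayGetLevel
    driver.cuMipmappedArrayDestroy(h_mipmap)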

cuda.bindings.driver.cuMipmappedArrayGetLevel(hMipmappedArray, unsigned int level)

Gets a mipmap level of a CUDA mipmapped array.

Returns in *pLevelArray a CUDA array that represents a single mipmap level of the CUDA mipmapped array hMipmappedArray.

If level is greater than the maximum number of levels in this mipmapped array, CUDA_ERROR_INVALID_VALUE is returned.

Parameters:
  • hMipmappedArray (CUmipmappedArray) – CUDA mipmapped array

  • level (unsigned int) – Mipmap level

Returns:

cuda.bindings.driver.cuMipmappedArrayDestroy(hMipmappedArray)

Destroys a CUDA mipmapped array.

Destroys the CUDA mipmapped array hMipmappedArray.

Parameters:

hMipmappedArray (CUmipmappedArray) – Mipmapped array to destroy

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_HANDLE, CUDA_ERROR_ARRAY_IS_MAPPED, CUDA_ERROR_CONTEXT_IS_DESTROYED

Return type:

CUresult

cuda.bindings.driver.cuMemGetHandleForAddressRange(dptr, size_t size, handleType: CUmemRangeHandleType, unsigned long long flags)

Retrieve handle for an address range.

Get a handle of the specified type to an address range. The address range must have been obtained by a prior call to either cuMemAlloc or cuMemAddressReserve. If the address range was obtained via cuMemAddressReserve, it must also be fully mapped via cuMemMap. On Tegra, the address range must instead have been obtained by a prior call to either cuMemAllocHost or cuMemHostAlloc.

Users must ensure the dptr and size are aligned to the host page size.

When requesting CUmemRangeHandleType::CU_MEM_RANGE_HANDLE_TYPE_DMA_BUF_FD, users are expected to query for dma_buf support for the platform by using CU_DEVICE_ATTRIBUTE_DMA_BUF_SUPPORTED device attribute before calling this API. The handle will be interpreted as a pointer to an integer to store the dma_buf file descriptor. Users must ensure the entire address range is backed and mapped when the address range is allocated by cuMemAddressReserve. All the physical allocations backing the address range must be resident on the same device and have identical allocation properties. Users are also expected to retrieve a new handle every time the underlying physical allocation(s) corresponding to a previously queried VA range are changed.

Parameters:
  • dptr (CUdeviceptr) – Pointer to a valid CUDA device allocation. Must be aligned to host page size.

  • size (size_t) – Length of the address range. Must be aligned to host page size.

  • handleType (CUmemRangeHandleType) – Type of handle requested (defines type and size of the handle output parameter)

  • flags (unsigned long long) – Reserved, must be zero

Returns:

  • CUresult – CUDA_SUCCESS CUDA_ERROR_INVALID_VALUE CUDA_ERROR_NOT_SUPPORTED

  • handle (Any) – Pointer to the location where the returned handle will be stored.

Virtual Memory Management

This section describes the virtual memory management functions of the low-level CUDA driver application programming interface.

cuda.bindings.driver.cuMemAddressReserve(size_t size, size_t alignment, addr, unsigned long long flags)

Allocate an address range reservation.

Reserves a virtual address range based on the given parameters, giving the starting address of the range in ptr. This API requires a system that supports UVA. The size and address parameters must be a multiple of the host page size and the alignment must be a power of two or zero for default alignment.

Parameters:
  • size (size_t) – Size of the reserved virtual address range requested

  • alignment (size_t) – Alignment of the reserved virtual address range requested

  • addr (CUdeviceptr) – Fixed starting address range requested

  • flags (unsigned long long) – Currently unused, must be zero

Returns:

See also

cuMemAddressFree

cuda.bindings.driver.cuMemAddressFree(ptr, size_t size)

Free an address range reservation.

Frees a virtual address range reserved by cuMemAddressReserve. The size must match what was given to cuMemAddressReserve and the ptr given must match what was returned from cuMemAddressReserve.

Parameters:
  • ptr (CUdeviceptr) – Starting address of the virtual address range to free

  • size (size_t) – Size of the virtual address region to free

Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_PERMITTED, CUDA_ERROR_NOT_SUPPORTED

Return type:

CUresult

cuda.bindings.driver.cuMemCreate(size_t size, CUmemAllocationProp prop: Optional[CUmemAllocationProp], unsigned long long flags)

Create a CUDA memory handle representing a memory allocation of a given size described by the given properties.

This creates a memory allocation on the target device specified through the prop structure. The created allocation will not have any device or host mappings. The generic memory handle for the allocation can be mapped to the address space of the calling process via cuMemMap. This handle cannot be transmitted directly to other processes (see cuMemExportToShareableHandle). On Windows, the caller must also pass an LPSECURITYATTRIBUTE in prop to be associated with this handle which limits or allows access to this handle for a recipient process (see win32HandleMetaData for more). The size of this allocation must be a multiple of the value given via cuMemGetAllocationGranularity with the CU_MEM_ALLOC_GRANULARITY_MINIMUM flag. To create a CPU allocation targeting a specific host NUMA node, applications must set CUmemAllocationProp::CUmemLocation::type to CU_MEM_LOCATION_TYPE_HOST_NUMA and CUmemAllocationProp::CUmemLocation::id must specify the NUMA ID of the CPU. On systems where NUMA is not available, CUmemAllocationProp::CUmemLocation::id must be set to 0. Specifying CU_MEM_LOCATION_TYPE_HOST_NUMA_CURRENT or CU_MEM_LOCATION_TYPE_HOST as the type will result in CUDA_ERROR_INVALID_VALUE.

Applications can set requestedHandleTypes to CU_MEM_HANDLE_TYPE_FABRIC in order to create allocations suitable for sharing within an IMEX domain. An IMEX domain is either an OS instance or a group of securely connected OS instances using the NVIDIA IMEX daemon. An IMEX channel is a global resource within the IMEX domain that represents a logical entity that aims to provide fine grained accessibility control for the participating processes. When exporter and importer CUDA processes have been granted access to the same IMEX channel, they can securely share memory. If the allocating process does not have access setup for an IMEX channel, attempting to create a CUmemGenericAllocationHandle with CU_MEM_HANDLE_TYPE_FABRIC will result in CUDA_ERROR_NOT_PERMITTED. The nvidia-modprobe CLI provides more information regarding setting up of IMEX channels.

If CUmemAllocationProp::allocFlags::usage contains the CU_MEM_CREATE_USAGE_TILE_POOL flag then the memory allocation is intended only to be used as a backing tile pool for sparse CUDA arrays and sparse CUDA mipmapped arrays (see cuMemMapArrayAsync).

Parameters:
  • size (size_t) – Size of the allocation requested

  • prop (CUmemAllocationProp) – Properties of the allocation to create.

  • flags (unsigned long long) – flags for future use, must be zero now.

Returns:
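
A minimal sketch of creating a device-backed physical allocation, rounding the size up to the minimum granularity as required above; device ordinal 0 and the 4 MiB size are arbitrary, and error checking is omitted:

    from cuda.bindings import driver

    driver.cuInit(0)
    _, dev = driver.cuDeviceGet(0)
    _, ctx = driver.cuDevicePrimaryCtxRetain(dev)
    driver.cuCtxSetCurrent(ctx)

    prop = driver.CUmemAllocationProp()
    prop.type = driver.CUmemAllocationType.CU_MEM_ALLOCATION_TYPE_PINNED
    prop.location.type = driver.CUmemLocationType.CU_MEM_LOCATION_TYPE_DEVICE
    prop.location.id = 0                          # device ordinal backing the allocation

    _, gran = driver.cuMemGetAllocationGranularity(
        prop, driver.CUmemAllocationGranularity_flags.CU_MEM_ALLOC_GRANULARITY_MINIMUM)
    size = 4 * 1024 * 1024
    size = ((size + gran - 1) // gran) * gran     # round up to a multiple of the granularity

    err, handle = driver.cuMemCreate(size, prop, 0)
    # ... map with cuMemMap / cuMemSetAccess (see the sketch under cuMemSetAccess) ...
    driver.cuMemRelease(handle)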

cuda.bindings.driver.cuMemRelease(handle)

Release a memory handle representing a memory allocation which was previously allocated through cuMemCreate.

Frees the memory that was allocated on a device through cuMemCreate.

The memory allocation will be freed when all outstanding mappings to the memory are unmapped and when all outstanding references to the handle (including its shareable counterparts) are also released. The generic memory handle can be freed even if there are still outstanding mappings made with this handle. Each time a recipient process imports a shareable handle, it needs to pair it with cuMemRelease for the handle to be freed. If handle is not a valid handle, the behavior is undefined.

Parameters:

handle (CUmemGenericAllocationHandle) – Value of handle which was returned previously by cuMemCreate.

Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_PERMITTED, CUDA_ERROR_NOT_SUPPORTED

Return type:

CUresult

See also

cuMemCreate

cuda.bindings.driver.cuMemMap(ptr, size_t size, size_t offset, handle, unsigned long long flags)

Maps an allocation handle to a reserved virtual address range.

Maps size bytes of memory represented by handle, starting at byte offset, to the address range [addr, addr + size]. This range must be an address reservation previously reserved with cuMemAddressReserve, and offset + size must be less than the size of the memory allocation. ptr, size, and offset must each be a multiple of the value given via cuMemGetAllocationGranularity with the CU_MEM_ALLOC_GRANULARITY_MINIMUM flag. If handle represents a multicast object, ptr, size and offset must be aligned to the value returned by cuMulticastGetGranularity with the flag CU_MULTICAST_MINIMUM_GRANULARITY. For best performance however, it is recommended that ptr, size and offset be aligned to the value returned by cuMulticastGetGranularity with the flag CU_MULTICAST_RECOMMENDED_GRANULARITY.

Please note calling cuMemMap does not make the address accessible, the caller needs to update accessibility of a contiguous mapped VA range by calling cuMemSetAccess.

Once a recipient process obtains a shareable memory handle from cuMemImportFromShareableHandle, the process must use cuMemMap to map the memory into its address ranges before setting accessibility with cuMemSetAccess.

cuMemMap can only create mappings on VA range reservations that are not currently mapped.

Parameters:
  • ptr (CUdeviceptr) – Address where memory will be mapped.

  • size (size_t) – Size of the memory mapping.

  • offset (size_t) – Offset into the memory represented by

  • handle (CUmemGenericAllocationHandle) – Handle to a shareable memory

  • flags (unsigned long long) – flags for future use, must be zero now.

Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_INVALID_DEVICE, CUDA_ERROR_OUT_OF_MEMORY, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_PERMITTED, CUDA_ERROR_NOT_SUPPORTED

Return type:

CUresult

cuda.bindings.driver.cuMemMapArrayAsync(mapInfoList: Optional[Tuple[CUarrayMapInfo] | List[CUarrayMapInfo]], unsigned int count, hStream)

Maps or unmaps subregions of sparse CUDA arrays and sparse CUDA mipmapped arrays.

Performs map or unmap operations on subregions of sparse CUDA arrays and sparse CUDA mipmapped arrays. Each operation is specified by a CUarrayMapInfo entry in the mapInfoList array of size count. The structure CUarrayMapInfo is defined as follows:

View CUDA Toolkit Documentation for a C++ code example

where resourceType specifies the type of resource to be operated on. If resourceType is set to CUresourcetype::CU_RESOURCE_TYPE_ARRAY then CUarrayMapInfo::resource::array must be set to a valid sparse CUDA array handle. The CUDA array must be either a 2D, 2D layered or 3D CUDA array and must have been allocated using cuArrayCreate or cuArray3DCreate with the flag CUDA_ARRAY3D_SPARSE or CUDA_ARRAY3D_DEFERRED_MAPPING. For CUDA arrays obtained using cuMipmappedArrayGetLevel, CUDA_ERROR_INVALID_VALUE will be returned. If resourceType is set to CUresourcetype::CU_RESOURCE_TYPE_MIPMAPPED_ARRAY then CUarrayMapInfo::resource::mipmap must be set to a valid sparse CUDA mipmapped array handle. The CUDA mipmapped array must be either a 2D, 2D layered or 3D CUDA mipmapped array and must have been allocated using cuMipmappedArrayCreate with the flag CUDA_ARRAY3D_SPARSE or CUDA_ARRAY3D_DEFERRED_MAPPING.

subresourceType specifies the type of subresource within the resource. CUarraySparseSubresourceType_enum is defined as:

View CUDA Toolkit Documentation for a C++ code example

where CUarraySparseSubresourceType::CU_ARRAY_SPARSE_SUBRESOURCE_TYPE_SPARSE_LEVEL indicates a sparse-miplevel which spans at least one tile in every dimension. The remaining miplevels which are too small to span at least one tile in any dimension constitute the mip tail region as indicated by CUarraySparseSubresourceType::CU_ARRAY_SPARSE_SUBRESOURCE_TYPE_MIPTAIL subresource type.

If subresourceType is set to CUarraySparseSubresourceType::CU_ARRAY_SPARSE_SUBRESOURCE_TYPE_SPARSE_LEVEL then CUarrayMapInfo::subresource::sparseLevel struct must contain valid array subregion offsets and extents. The CUarrayMapInfo::subresource::sparseLevel::offsetX, CUarrayMapInfo::subresource::sparseLevel::offsetY and CUarrayMapInfo::subresource::sparseLevel::offsetZ must specify valid X, Y and Z offsets respectively. The CUarrayMapInfo::subresource::sparseLevel::extentWidth, CUarrayMapInfo::subresource::sparseLevel::extentHeight and CUarrayMapInfo::subresource::sparseLevel::extentDepth must specify valid width, height and depth extents respectively. These offsets and extents must be aligned to the corresponding tile dimension. For CUDA mipmapped arrays CUarrayMapInfo::subresource::sparseLevel::level must specify a valid mip level index. Otherwise, must be zero. For layered CUDA arrays and layered CUDA mipmapped arrays CUarrayMapInfo::subresource::sparseLevel::layer must specify a valid layer index. Otherwise, must be zero. CUarrayMapInfo::subresource::sparseLevel::offsetZ must be zero and CUarrayMapInfo::subresource::sparseLevel::extentDepth must be set to 1 for 2D and 2D layered CUDA arrays and CUDA mipmapped arrays. Tile extents can be obtained by calling cuArrayGetSparseProperties and cuMipmappedArrayGetSparseProperties

If subresourceType is set to CUarraySparseSubresourceType::CU_ARRAY_SPARSE_SUBRESOURCE_TYPE_MIPTAIL then CUarrayMapInfo::subresource::miptail struct must contain valid mip tail offset in CUarrayMapInfo::subresource::miptail::offset and size in CUarrayMapInfo::subresource::miptail::size. Both, mip tail offset and mip tail size must be aligned to the tile size. For layered CUDA mipmapped arrays which don’t have the flag CU_ARRAY_SPARSE_PROPERTIES_SINGLE_MIPTAIL set in flags as returned by cuMipmappedArrayGetSparseProperties, CUarrayMapInfo::subresource::miptail::layer must specify a valid layer index. Otherwise, must be zero.

If CUarrayMapInfo::resource::array or CUarrayMapInfo::resource::mipmap was created with CUDA_ARRAY3D_DEFERRED_MAPPING flag set the subresourceType and the contents of CUarrayMapInfo::subresource will be ignored.

memOperationType specifies the type of operation. CUmemOperationType is defined as:

View CUDA Toolkit Documentation for a C++ code example

If memOperationType is set to CUmemOperationType::CU_MEM_OPERATION_TYPE_MAP then the subresource will be mapped onto the tile pool memory specified by CUarrayMapInfo::memHandle at offset offset. The tile pool allocation has to be created by specifying the CU_MEM_CREATE_USAGE_TILE_POOL flag when calling cuMemCreate. Also, memHandleType must be set to CUmemHandleType::CU_MEM_HANDLE_TYPE_GENERIC.

If memOperationType is set to CUmemOperationType::CU_MEM_OPERATION_TYPE_UNMAP then an unmapping operation is performed. CUarrayMapInfo::memHandle must be NULL.

deviceBitMask specifies the list of devices that must map or unmap physical memory. Currently, this mask must have exactly one bit set, and the corresponding device must match the device associated with the stream. If memOperationType is set to CUmemOperationType::CU_MEM_OPERATION_TYPE_MAP, the device must also match the device associated with the tile pool memory allocation as specified by CUarrayMapInfo::memHandle.

flags and CUarrayMapInfo.reserved[] are unused and must be set to zero.

Parameters:
Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_INVALID_HANDLE

Return type:

CUresult

cuda.bindings.driver.cuMemUnmap(ptr, size_t size)

Unmap the backing memory of a given address range.

The range must be the entire contiguous address range that was mapped to. In other words, cuMemUnmap cannot unmap a sub-range of an address range mapped by cuMemCreate / cuMemMap. Any backing memory allocations will be freed if there are no existing mappings and there are no unreleased memory handles.

When cuMemUnmap returns successfully the address range is converted to an address reservation and can be used for future calls to cuMemMap. Any new mapping to this virtual address will need to have access granted through cuMemSetAccess, as all mappings start with no accessibility setup.

Parameters:
  • ptr (CUdeviceptr) – Starting address for the virtual address range to unmap

  • size (size_t) – Size of the virtual address range to unmap

Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_PERMITTED, CUDA_ERROR_NOT_SUPPORTED

Return type:

CUresult

cuda.bindings.driver.cuMemSetAccess(ptr, size_t size, desc: Optional[Tuple[CUmemAccessDesc] | List[CUmemAccessDesc]], size_t count)

Set the access flags for each location specified in desc for the given virtual address range.

Given the virtual address range via ptr and size, and the locations in the array given by desc and count, set the access flags for the target locations. The range must be a fully mapped address range containing all allocations created by cuMemMap / cuMemCreate. Users cannot specify CU_MEM_LOCATION_TYPE_HOST_NUMA accessibility for allocations created with other location types. Note: When CUmemAccessDesc::CUmemLocation::type is CU_MEM_LOCATION_TYPE_HOST_NUMA, CUmemAccessDesc::CUmemLocation::id is ignored. When setting the access flags for a virtual address range mapping a multicast object, ptr and size must be aligned to the value returned by cuMulticastGetGranularity with the flag CU_MULTICAST_MINIMUM_GRANULARITY. For best performance however, it is recommended that ptr and size be aligned to the value returned by cuMulticastGetGranularity with the flag CU_MULTICAST_RECOMMENDED_GRANULARITY.

Parameters:
Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_INVALID_DEVICE, CUDA_ERROR_NOT_SUPPORTED

Return type:

CUresult
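
Putting the pieces together, here is a minimal end-to-end sketch of the reserve, map, and set-access flow described above, using one granule on device 0 with error checking omitted:

    from cuda.bindings import driver

    driver.cuInit(0)
    _, dev = driver.cuDeviceGet(0)
    _, ctx = driver.cuDevicePrimaryCtxRetain(dev)
    driver.cuCtxSetCurrent(ctx)

    # Physical backing, as in the cuMemCreate sketch.
    prop = driver.CUmemAllocationProp()
    prop.type = driver.CUmemAllocationType.CU_MEM_ALLOCATION_TYPE_PINNED
    prop.location.type = driver.CUmemLocationType.CU_MEM_LOCATION_TYPE_DEVICE
    prop.location.id = 0
    _, gran = driver.cuMemGetAllocationGranularity(
        prop, driver.CUmemAllocationGranularity_flags.CU_MEM_ALLOC_GRANULARITY_MINIMUM)
    size = gran
    _, handle = driver.cuMemCreate(size, prop, 0)

    # Reserve a VA range, map the handle into it, then grant read/write access.
    _, ptr = driver.cuMemAddressReserve(size, 0, 0, 0)    # default alignment, no fixed address
    driver.cuMemMap(ptr, size, 0, handle, 0)

    desc = driver.CUmemAccessDesc()
    desc.location.type = driver.CUmemLocationType.CU_MEM_LOCATION_TYPE_DEVICE
    desc.location.id = 0
    desc.flags = driver.CUmemAccess_flags.CU_MEM_ACCESS_FLAGS_PROT_READWRITE
    driver.cuMemSetAccess(ptr, size, [desc], 1)

    # ... ptr now behaves like ordinary device memory for kernels and memcpys ...

    driver.cuMemUnmap(ptr, size)
    driver.cuMemAddressFree(ptr, size)
    driver.cuMemRelease(handle)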

cuda.bindings.driver.cuMemGetAccess(CUmemLocation location: Optional[CUmemLocation], ptr)

Get the access flags set for the given location and ptr.

Parameters:
  • location (CUmemLocation) – Location in which to check the flags for

  • ptr (CUdeviceptr) – Address in which to check the access flags for

Returns:

See also

cuMemSetAccess

cuda.bindings.driver.cuMemExportToShareableHandle(handle, handleType: CUmemAllocationHandleType, unsigned long long flags)

Exports an allocation to a requested shareable handle type.

Given a CUDA memory handle, create a shareable memory allocation handle that can be used to share the memory with other processes. The recipient process can convert the shareable handle back into a CUDA memory handle using cuMemImportFromShareableHandle and map it with cuMemMap. The implementation of what this handle is and how it can be transferred is defined by the requested handle type in handleType.

Once all shareable handles are closed and the allocation is released, the allocated memory referenced will be released back to the OS and uses of the CUDA handle afterward will lead to undefined behavior.

This API can also be used in conjunction with other APIs (e.g. Vulkan, OpenGL) that support importing memory from the shareable type.

Parameters:
  • handle (CUmemGenericAllocationHandle) – CUDA handle for the memory allocation

  • handleType (CUmemAllocationHandleType) – Type of shareable handle requested (defines type and size of the shareableHandle output parameter)

  • flags (unsigned long long) – Reserved, must be zero

Returns:

cuda.bindings.driver.cuMemImportFromShareableHandle(osHandle, shHandleType: CUmemAllocationHandleType)

Imports an allocation from a requested shareable handle type.

If the current process cannot support the memory described by this shareable handle, this API will error as CUDA_ERROR_NOT_SUPPORTED.

If shHandleType is CU_MEM_HANDLE_TYPE_FABRIC and the importer process has not been granted access to the same IMEX channel as the exporter process, this API will error as CUDA_ERROR_NOT_PERMITTED.

Parameters:
Returns:

Notes

Importing shareable handles exported from some graphics APIs (Vulkan, OpenGL, etc.) created on devices under an SLI group may not be supported, and thus this API will return CUDA_ERROR_NOT_SUPPORTED. There is no guarantee that the contents of handle will be the same CUDA memory handle for the same given OS shareable handle, or the same underlying allocation.

cuda.bindings.driver.cuMemGetAllocationGranularity(CUmemAllocationProp prop: Optional[CUmemAllocationProp], option: CUmemAllocationGranularity_flags)

Calculates either the minimal or recommended granularity.

Calculates either the minimal or recommended granularity for a given allocation specification and returns it in granularity. This granularity can be used as a multiple for alignment, size, or address mapping.

Parameters:
Returns:

See also

cuMemCreate, cuMemMap

cuda.bindings.driver.cuMemGetAllocationPropertiesFromHandle(handle)

Retrieve the contents of the property structure defining properties for this handle.

Parameters:

handle (CUmemGenericAllocationHandle) – Handle which to perform the query on

Returns:

cuda.bindings.driver.cuMemRetainAllocationHandle(addr)

Given an address addr, returns the allocation handle of the backing memory allocation.

The handle is guaranteed to be the same handle value used to map the memory. If the address requested is not mapped, the function will fail. The returned handle must be released with a corresponding number of calls to cuMemRelease.

Parameters:

addr (Any) – Memory address to query, that has been mapped previously.

Returns:

Notes

The address addr can be any address in a range previously mapped by cuMemMap, and not necessarily the start address.

Stream Ordered Memory Allocator

This section describes the stream ordered memory allocator exposed by the low-level CUDA driver application programming interface.

overview

The asynchronous allocator allows the user to allocate and free in stream order. All asynchronous accesses of the allocation must happen between the stream executions of the allocation and the free. If the memory is accessed outside of the promised stream order, a use-before-allocation or use-after-free error results in undefined behavior.

The allocator is free to reallocate the memory as long as it can guarantee that compliant memory accesses will not overlap temporally. The allocator may refer to internal stream ordering as well as inter-stream dependencies (such as CUDA events and null stream dependencies) when establishing the temporal guarantee. The allocator may also insert inter-stream dependencies to establish the temporal guarantee.

Supported Platforms

Whether or not a device supports the integrated stream ordered memory allocator may be queried by calling cuDeviceGetAttribute() with the device attribute CU_DEVICE_ATTRIBUTE_MEMORY_POOLS_SUPPORTED.
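
A minimal sketch of the allocate/free-in-stream-order pattern, guarded by the attribute query above; the 1 MiB size is arbitrary and error checking is omitted:

    from cuda.bindings import driver

    driver.cuInit(0)
    _, dev = driver.cuDeviceGet(0)
    _, supported = driver.cuDeviceGetAttribute(
        driver.CUdevice_attribute.CU_DEVICE_ATTRIBUTE_MEMORY_POOLS_SUPPORTED, dev)

    if supported:
        _, ctx = driver.cuDevicePrimaryCtxRetain(dev)
        driver.cuCtxSetCurrent(ctx)
        _, stream = driver.cuStreamCreate(0)

        _, dptr = driver.cuMemAllocAsync(1 << 20, stream)   # 1 MiB, allocated in stream order
        # ... work launched into `stream` after this point may use dptr ...
        driver.cuMemFreeAsync(dptr, stream)                 # freed in the same stream order
        driver.cuStreamSynchronize(stream)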

cuda.bindings.driver.cuMemFreeAsync(dptr, hStream)

Frees memory with stream ordered semantics.

Inserts a free operation into hStream. The allocation must not be accessed after stream execution reaches the free. After this API returns, accessing the memory from any subsequent work launched on the GPU or querying its pointer attributes results in undefined behavior.

Parameters:
Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT (default stream specified with no current context), CUDA_ERROR_NOT_SUPPORTED

Return type:

CUresult

Notes

During stream capture, this function results in the creation of a free node and must therefore be passed the address of a graph allocation.

cuda.bindings.driver.cuMemAllocAsync(size_t bytesize, hStream)

Allocates memory with stream ordered semantics.

Inserts an allocation operation into hStream. A pointer to the allocated memory is returned immediately in *dptr. The allocation must not be accessed until the allocation operation completes. The allocation comes from the memory pool current to the stream’s device.

Parameters:
  • bytesize (size_t) – Number of bytes to allocate

  • hStream (CUstream or cudaStream_t) – The stream establishing the stream ordering contract and the memory pool to allocate from

Returns:

Notes

The default memory pool of a device contains device memory from that device.

Basic stream ordering allows future work submitted into the same stream to use the allocation. Stream query, stream synchronize, and CUDA events can be used to guarantee that the allocation operation completes before work submitted in a separate stream runs.

During stream capture, this function results in the creation of an allocation node. In this case, the allocation is owned by the graph instead of the memory pool. The memory pool’s properties are used to set the node’s creation parameters.

cuda.bindings.driver.cuMemPoolTrimTo(pool, size_t minBytesToKeep)

Tries to release memory back to the OS.

Releases memory back to the OS until the pool contains fewer than minBytesToKeep reserved bytes, or there is no more memory that the allocator can safely release. The allocator cannot release OS allocations that back outstanding asynchronous allocations. The OS allocations may happen at different granularity from the user allocations.

Parameters:
  • pool (CUmemoryPool or cudaMemPool_t) – The memory pool to trim

  • minBytesToKeep (size_t) – If the pool has less than minBytesToKeep reserved, the TrimTo operation is a no-op. Otherwise the pool will be guaranteed to have at least minBytesToKeep bytes reserved after the operation.

Returns:

CUDA_SUCCESS, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult

Notes

Allocations that have not been freed count as outstanding.

Allocations that have been asynchronously freed but whose completion has not been observed on the host (e.g. by a synchronize) can count as outstanding.

cuda.bindings.driver.cuMemPoolSetAttribute(pool, attr: CUmemPool_attribute, value)

Sets attributes of a memory pool.

Supported attributes are:

  • CU_MEMPOOL_ATTR_RELEASE_THRESHOLD: (value type = cuuint64_t) Amount of reserved memory in bytes to hold onto before trying to release memory back to the OS. When more than the release threshold bytes of memory are held by the memory pool, the allocator will try to release memory back to the OS on the next call to stream, event or context synchronize. (default 0)

  • CU_MEMPOOL_ATTR_REUSE_FOLLOW_EVENT_DEPENDENCIES: (value type = int) Allow cuMemAllocAsync to use memory asynchronously freed in another stream as long as a stream ordering dependency of the allocating stream on the free action exists. Cuda events and null stream interactions can create the required stream ordered dependencies. (default enabled)

  • CU_MEMPOOL_ATTR_REUSE_ALLOW_OPPORTUNISTIC: (value type = int) Allow reuse of already completed frees when there is no dependency between the free and allocation. (default enabled)

  • CU_MEMPOOL_ATTR_REUSE_ALLOW_INTERNAL_DEPENDENCIES: (value type = int) Allow cuMemAllocAsync to insert new stream dependencies in order to establish the stream ordering required to reuse a piece of memory released by cuMemFreeAsync (default enabled).

  • CU_MEMPOOL_ATTR_RESERVED_MEM_HIGH: (value type = cuuint64_t) Reset the high watermark that tracks the amount of backing memory that was allocated for the memory pool. It is illegal to set this attribute to a non-zero value.

  • CU_MEMPOOL_ATTR_USED_MEM_HIGH: (value type = cuuint64_t) Reset the high watermark that tracks the amount of used memory that was allocated for the memory pool.

Parameters:
Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult
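
For example, the release threshold on a device's default pool can be raised so the allocator keeps memory cached across synchronization points. A sketch assuming the 64-bit attribute value is wrapped in cuuint64_t, with error checking omitted:

    from cuda.bindings import driver

    driver.cuInit(0)
    _, dev = driver.cuDeviceGet(0)
    _, pool = driver.cuDeviceGetDefaultMemPool(dev)

    # Hold on to up to 64 MiB instead of returning it to the OS at every synchronize.
    threshold = driver.cuuint64_t(64 * 1024 * 1024)
    err, = driver.cuMemPoolSetAttribute(
        pool, driver.CUmemPool_attribute.CU_MEMPOOL_ATTR_RELEASE_THRESHOLD, threshold)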

cuda.bindings.driver.cuMemPoolGetAttribute(pool, attr: CUmemPool_attribute)

Gets attributes of a memory pool.

Supported attributes are:

Parameters:
Returns:

cuda.bindings.driver.cuMemPoolSetAccess(pool, map: Optional[Tuple[CUmemAccessDesc] | List[CUmemAccessDesc]], size_t count)

Controls visibility of pools between devices.

Parameters:
  • pool (CUmemoryPool or cudaMemPool_t) – The pool being modified

  • map (List[CUmemAccessDesc]) – Array of access descriptors. Each descriptor specifies the access to enable for a single GPU.

  • count (size_t) – Number of descriptors in the map array.

Returns:

CUDA_SUCCESS, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult

cuda.bindings.driver.cuMemPoolGetAccess(memPool, CUmemLocation location: Optional[CUmemLocation])

Returns the accessibility of a pool from a device.

Returns the accessibility of the pool’s memory from the specified location.

Parameters:
Returns:

  • CUresult

  • flags (CUmemAccess_flags) – the accessibility of the pool from the specified location

cuda.bindings.driver.cuMemPoolCreate(CUmemPoolProps poolProps: Optional[CUmemPoolProps])

Creates a memory pool.

Creates a CUDA memory pool and returns the handle in pool. The poolProps determines the properties of the pool such as the backing device and IPC capabilities.

To create a memory pool targeting a specific host NUMA node, applications must set CUmemPoolProps::CUmemLocation::type to CU_MEM_LOCATION_TYPE_HOST_NUMA and CUmemPoolProps::CUmemLocation::id must specify the NUMA ID of the host memory node. Specifying CU_MEM_LOCATION_TYPE_HOST_NUMA_CURRENT or CU_MEM_LOCATION_TYPE_HOST as the CUmemPoolProps::CUmemLocation::type will result in CUDA_ERROR_INVALID_VALUE. By default, the pool’s memory will be accessible from the device it is allocated on. In the case of pools created with CU_MEM_LOCATION_TYPE_HOST_NUMA, their default accessibility will be from the host CPU. Applications can control the maximum size of the pool by specifying a non-zero value for maxSize. If set to 0, the maximum size of the pool will default to a system dependent value.

Applications can set handleTypes to CU_MEM_HANDLE_TYPE_FABRIC in order to create CUmemoryPool suitable for sharing within an IMEX domain. An IMEX domain is either an OS instance or a group of securely connected OS instances using the NVIDIA IMEX daemon. An IMEX channel is a global resource within the IMEX domain that represents a logical entity that aims to provide fine grained accessibility control for the participating processes. When exporter and importer CUDA processes have been granted access to the same IMEX channel, they can securely share memory. If the allocating process does not have access setup for an IMEX channel, attempting to export a CUmemoryPool with CU_MEM_HANDLE_TYPE_FABRIC will result in CUDA_ERROR_NOT_PERMITTED. The nvidia-modprobe CLI provides more information regarding setting up of IMEX channels.

Parameters:

poolProps (CUmemPoolProps) – None

Returns:

Notes

Specifying CU_MEM_HANDLE_TYPE_NONE creates a memory pool that will not support IPC.
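
A minimal sketch of creating an explicit device pool and allocating from it; handleTypes is left at its default of CU_MEM_HANDLE_TYPE_NONE (so the pool is not IPC capable), the sizes are arbitrary, and error checking is omitted:

    from cuda.bindings import driver

    driver.cuInit(0)
    _, dev = driver.cuDeviceGet(0)
    _, ctx = driver.cuDevicePrimaryCtxRetain(dev)
    driver.cuCtxSetCurrent(ctx)
    _, stream = driver.cuStreamCreate(0)

    props = driver.CUmemPoolProps()
    props.allocType = driver.CUmemAllocationType.CU_MEM_ALLOCATION_TYPE_PINNED
    props.location.type = driver.CUmemLocationType.CU_MEM_LOCATION_TYPE_DEVICE
    props.location.id = 0                         # device ordinal backing the pool

    err, pool = driver.cuMemPoolCreate(props)

    # Draw an allocation from the new pool (see cuMemAllocFromPoolAsync below).
    _, dptr = driver.cuMemAllocFromPoolAsync(1 << 20, pool, stream)
    driver.cuMemFreeAsync(dptr, stream)
    driver.cuStreamSynchronize(stream)
    driver.cuMemPoolDestroy(pool)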

cuda.bindings.driver.cuMemPoolDestroy(pool)

Destroys the specified memory pool.

If any pointers obtained from this pool haven’t been freed or the pool has free operations that haven’t completed when cuMemPoolDestroy is invoked, the function will return immediately and the resources associated with the pool will be released automatically once there are no more outstanding allocations.

Destroying the current mempool of a device sets the default mempool of that device as the current mempool for that device.

Parameters:

pool (CUmemoryPool or cudaMemPool_t) – None

Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult

Notes

A device’s default memory pool cannot be destroyed.

cuda.bindings.driver.cuMemAllocFromPoolAsync(size_t bytesize, pool, hStream)

Allocates memory from a specified pool with stream ordered semantics.

Inserts an allocation operation into hStream. A pointer to the allocated memory is returned immediately in *dptr. The allocation must not be accessed until the allocation operation completes. The allocation comes from the specified memory pool.

Parameters:
Returns:

Notes

During stream capture, this function results in the creation of an allocation node. In this case, the allocation is owned by the graph instead of the memory pool. The memory pool’s properties are used to set the node’s creation parameters.

cuda.bindings.driver.cuMemPoolExportToShareableHandle(pool, handleType: CUmemAllocationHandleType, unsigned long long flags)

Exports a memory pool to the requested handle type.

Given an IPC capable mempool, create an OS handle to share the pool with another process. A recipient process can convert the shareable handle into a mempool with cuMemPoolImportFromShareableHandle. Individual pointers can then be shared with the cuMemPoolExportPointer and cuMemPoolImportPointer APIs. The implementation of what the shareable handle is and how it can be transferred is defined by the requested handle type.

Parameters:
Returns:

Notes

To create an IPC-capable mempool, create a mempool with a CUmemAllocationHandleType other than CU_MEM_HANDLE_TYPE_NONE.

cuda.bindings.driver.cuMemPoolImportFromShareableHandle(handle, handleType: CUmemAllocationHandleType, unsigned long long flags)

Imports a memory pool from a shared handle.

Specific allocations can be imported from the imported pool with cuMemPoolImportPointer.

If handleType is CU_MEM_HANDLE_TYPE_FABRIC and the importer process has not been granted access to the same IMEX channel as the exporter process, this API will error as CUDA_ERROR_NOT_PERMITTED.

Parameters:
  • handle (Any) – OS handle of the pool to open

  • handleType (CUmemAllocationHandleType) – The type of handle being imported

  • flags (unsigned long long) – must be 0

Returns:

Notes

Imported memory pools do not support creating new allocations. As such, imported memory pools may not be used in cuDeviceSetMemPool or cuMemAllocFromPoolAsync calls.

cuda.bindings.driver.cuMemPoolExportPointer(ptr)

Export data to share a memory pool allocation between processes.

Constructs shareData_out for sharing a specific allocation from an already shared memory pool. The recipient process can import the allocation with the cuMemPoolImportPointer API. The data is not a handle and may be shared through any IPC mechanism.

Parameters:

ptr (CUdeviceptr) – pointer to memory being exported

Returns:

cuda.bindings.driver.cuMemPoolImportPointer(pool, CUmemPoolPtrExportData shareData: Optional[CUmemPoolPtrExportData])

Import a memory pool allocation from another process.

Returns in ptr_out a pointer to the imported memory. The imported memory must not be accessed before the allocation operation completes in the exporting process. The imported memory must be freed from all importing processes before being freed in the exporting process. The pointer may be freed with cuMemFree or cuMemFreeAsync. If cuMemFreeAsync is used, the free must be completed on the importing process before the free operation on the exporting process.

Parameters:
Returns:

Notes

The cuMemFreeAsync API may be used in the exporting process before the cuMemFreeAsync operation completes in its stream, as long as the cuMemFreeAsync in the exporting process specifies a stream with a stream dependency on the importing process’s cuMemFreeAsync.
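
For example, a minimal sketch of the export/import flow using a POSIX file-descriptor handle; transporting fd and share_data between the two processes (e.g. over a Unix domain socket) is application-specific and elided, error checking is omitted, and the bindings are assumed to return (CUresult, ...) tuples:

    from cuda.bindings import driver

    # Exporting process: "pool" was created with handleTypes set to
    # CU_MEM_HANDLE_TYPE_POSIX_FILE_DESCRIPTOR and "dptr" was allocated from it.
    fd_type = driver.CUmemAllocationHandleType.CU_MEM_HANDLE_TYPE_POSIX_FILE_DESCRIPTOR
    err, fd = driver.cuMemPoolExportToShareableHandle(pool, fd_type, 0)
    err, share_data = driver.cuMemPoolExportPointer(dptr)

    # Importing process: after receiving fd and share_data.
    err, imported_pool = driver.cuMemPoolImportFromShareableHandle(fd, fd_type, 0)
    err, imported_ptr = driver.cuMemPoolImportPointer(imported_pool, share_data)
    # ... use imported_ptr, then free it in every importing process before the
    # exporting process frees its allocation ...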

Multicast Object Management

This section describes the CUDA multicast object operations exposed by the low-level CUDA driver application programming interface.

Overview

A multicast object created via cuMulticastCreate enables certain memory operations to be broadcast to a team of devices. Devices can be added to a multicast object via cuMulticastAddDevice. Memory can be bound on each participating device via either cuMulticastBindMem or cuMulticastBindAddr. Multicast objects can be mapped into a device’s virtual address space using the virtual memory management APIs (see cuMemMap and cuMemSetAccess).

Supported Platforms

Support for multicast on a specific device can be queried using the device attribute CU_DEVICE_ATTRIBUTE_MULTICAST_SUPPORTED.

cuda.bindings.driver.cuMulticastCreate(CUmulticastObjectProp prop: Optional[CUmulticastObjectProp])

Create a generic allocation handle representing a multicast object described by the given properties.

This creates a multicast object as described by prop. The number of participating devices is specified by numDevices. Devices can be added to the multicast object via cuMulticastAddDevice. All participating devices must be added to the multicast object before memory can be bound to it. Memory is bound to the multicast object via either cuMulticastBindMem or cuMulticastBindAddr, and can be unbound via cuMulticastUnbind. The total amount of memory that can be bound per device is specified by size. This size must be a multiple of the value returned by cuMulticastGetGranularity with the flag CU_MULTICAST_GRANULARITY_MINIMUM. For best performance, however, the size should be aligned to the value returned by cuMulticastGetGranularity with the flag CU_MULTICAST_GRANULARITY_RECOMMENDED.

After all participating devices have been added, multicast objects can also be mapped to a device’s virtual address space using the virtual memory management APIs (see cuMemMap and cuMemSetAccess). Multicast objects can also be shared with other processes by requesting a shareable handle via cuMemExportToShareableHandle. Note that the desired types of shareable handles must be specified in the bitmask handleTypes. Multicast objects can be released using the virtual memory management API cuMemRelease.

Parameters:

prop (CUmulticastObjectProp) – Properties of the multicast object to create.

Returns:

cuda.bindings.driver.cuMulticastAddDevice(mcHandle, dev)

Associate a device to a multicast object.

Associates a device to a multicast object. The added device will be a part of the multicast team of size specified by numDevices during cuMulticastCreate. The association of the device to the multicast object is permanent during the lifetime of the multicast object. All devices must be added to the multicast team before any memory can be bound to any device in the team. Any calls to cuMulticastBindMem or cuMulticastBindAddr will block until all devices have been added. Similarly, all devices must be added to the multicast team before a virtual address range can be mapped to the multicast object. A call to cuMemMap will block until all devices have been added.

Parameters:
Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_OUT_OF_MEMORY, CUDA_ERROR_INVALID_DEVICE, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_PERMITTED, CUDA_ERROR_NOT_SUPPORTED

Return type:

CUresult

cuda.bindings.driver.cuMulticastBindMem(mcHandle, size_t mcOffset, memHandle, size_t memOffset, size_t size, unsigned long long flags)

Bind a memory allocation represented by a handle to a multicast object.

Binds a memory allocation specified by memHandle and created via cuMemCreate to a multicast object represented by mcHandle and created via cuMulticastCreate. The intended size of the bind, the offset in the multicast range mcOffset as well as the offset in the memory memOffset must be a multiple of the value returned by cuMulticastGetGranularity with the flag CU_MULTICAST_GRANULARITY_MINIMUM. For best performance, however, size, mcOffset and memOffset should be aligned to the granularity of the memory allocation (see cuMemGetAllocationGranularity) or to the value returned by cuMulticastGetGranularity with the flag CU_MULTICAST_GRANULARITY_RECOMMENDED.

The size + memOffset cannot be larger than the size of the allocated memory. Similarly, the size + mcOffset cannot be larger than the size of the multicast object. The memory allocation must have been created on one of the devices that was added to the multicast team via cuMulticastAddDevice. Externally shareable as well as imported multicast objects can be bound only to externally shareable memory. Note that this call will return CUDA_ERROR_OUT_OF_MEMORY if there are insufficient resources required to perform the bind. This call may also return CUDA_ERROR_SYSTEM_NOT_READY if the necessary system software is not initialized or running.

Parameters:
  • mcHandle (CUmemGenericAllocationHandle) – Handle representing a multicast object.

  • mcOffset (size_t) – Offset into the multicast object for attachment.

  • memHandle (CUmemGenericAllocationHandle) – Handle representing a memory allocation.

  • memOffset (size_t) – Offset into the memory for attachment.

  • size (size_t) – Size of the memory that will be bound to the multicast object.

  • flags (unsigned long long) – Flags for future use, must be zero for now.

Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_INVALID_DEVICE, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_PERMITTED, CUDA_ERROR_NOT_SUPPORTED, CUDA_ERROR_OUT_OF_MEMORY, CUDA_ERROR_SYSTEM_NOT_READY

Return type:

CUresult

cuda.bindings.driver.cuMulticastBindAddr(mcHandle, size_t mcOffset, memptr, size_t size, unsigned long long flags)

Bind a memory allocation represented by a virtual address to a multicast object.

Binds a memory allocation specified by its mapped address memptr to a multicast object represented by mcHandle. The memory must have been allocated via cuMemCreate or cudaMallocAsync. The intended size of the bind, the offset in the multicast range mcOffset and memptr must be a multiple of the value returned by cuMulticastGetGranularity with the flag CU_MULTICAST_GRANULARITY_MINIMUM. For best performance however, size, mcOffset and memptr should be aligned to the value returned by cuMulticastGetGranularity with the flag CU_MULTICAST_GRANULARITY_RECOMMENDED.

The size cannot be larger than the size of the allocated memory. Similarly, the size + mcOffset cannot be larger than the total size of the multicast object. The memory allocation must have been created on one of the devices that was added to the multicast team via cuMulticastAddDevice. Externally shareable as well as imported multicast objects can be bound only to externally shareable memory. Note that this call will return CUDA_ERROR_OUT_OF_MEMORY if there are insufficient resources required to perform the bind. This call may also return CUDA_ERROR_SYSTEM_NOT_READY if the necessary system software is not initialized or running.

Parameters:
  • mcHandle (CUmemGenericAllocationHandle) – Handle representing a multicast object.

  • mcOffset (size_t) – Offset into multicast va range for attachment.

  • memptr (CUdeviceptr) – Virtual address of the memory allocation.

  • size (size_t) – Size of memory that will be bound to the multicast object.

  • flags (unsigned long long) – Flags for future use, must be zero now.

Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_INVALID_DEVICE, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_PERMITTED, CUDA_ERROR_NOT_SUPPORTED, CUDA_ERROR_OUT_OF_MEMORY, CUDA_ERROR_SYSTEM_NOT_READY

Return type:

CUresult

cuda.bindings.driver.cuMulticastUnbind(mcHandle, dev, size_t mcOffset, size_t size)

Unbind any memory allocations bound to a multicast object at a given offset and up to a given size.

Unbinds any memory allocations hosted on dev and bound to a multicast object at mcOffset and up to a given size. The intended size of the unbind and the offset in the multicast range (mcOffset) must be a multiple of the value returned by cuMulticastGetGranularity with the flag CU_MULTICAST_GRANULARITY_MINIMUM. The size + mcOffset cannot be larger than the total size of the multicast object.

Parameters:
  • mcHandle (CUmemGenericAllocationHandle) – Handle representing a multicast object.

  • dev (CUdevice) – Device that hosts the memory allocation.

  • mcOffset (size_t) – Offset into the multicast object.

  • size (size_t) – Desired size to unbind.

Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_INVALID_DEVICE, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_PERMITTED, CUDA_ERROR_NOT_SUPPORTED

Return type:

CUresult

Notes

Warning: The mcOffset and the size must match the corresponding values specified during the bind call. Any other values may result in undefined behavior.

cuda.bindings.driver.cuMulticastGetGranularity(CUmulticastObjectProp prop: Optional[CUmulticastObjectProp], option: CUmulticastGranularity_flags)

Calculates either the minimal or recommended granularity for a multicast object.

Calculates either the minimal or recommended granularity for a given set of multicast object properties and returns it in granularity. This granularity can be used as a multiple for size, bind offsets and address mappings of the multicast object.

Parameters:
Returns:
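
For example, a minimal sketch of the multicast flow for a two-device team; the cuMemCreate/cuMemMap plumbing on each device is elided, error checking is omitted, and the bindings are assumed to return (CUresult, ...) tuples:

    from cuda.bindings import driver

    mc_prop = driver.CUmulticastObjectProp()
    mc_prop.numDevices = 2

    # Query the recommended granularity for these properties and size the object.
    err, gran = driver.cuMulticastGetGranularity(
        mc_prop, driver.CUmulticastGranularity_flags.CU_MULTICAST_GRANULARITY_RECOMMENDED)
    mc_prop.size = gran

    err, mc_handle = driver.cuMulticastCreate(mc_prop)
    for ordinal in (0, 1):                    # all devices must be added before binding
        err, dev = driver.cuDeviceGet(ordinal)
        err, = driver.cuMulticastAddDevice(mc_handle, dev)

    # mem_handle below stands for a cuMemCreate allocation of at least gran bytes
    # on one of the added devices:
    # err, = driver.cuMulticastBindMem(mc_handle, 0, mem_handle, 0, gran, 0)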

Unified Addressing

This section describes the unified addressing functions of the low-level CUDA driver application programming interface.

Overview

CUDA devices can share a unified address space with the host. For these devices there is no distinction between a device pointer and a host pointer – the same pointer value may be used to access memory from the host program and from a kernel running on the device (with exceptions enumerated below).

Supported Platforms

Whether or not a device supports unified addressing may be queried by calling cuDeviceGetAttribute() with the device attribute CU_DEVICE_ATTRIBUTE_UNIFIED_ADDRESSING.

Unified addressing is automatically enabled in 64-bit processes.

Looking Up Information from Pointer Values

It is possible to look up information about the memory which backs a pointer value. For instance, one may want to know if a pointer points to host or device memory. As another example, in the case of device memory, one may want to know on which CUDA device the memory resides. These properties may be queried using the function cuPointerGetAttribute().

Since pointers are unique, it is not necessary to specify information about the pointers specified to the various copy functions in the CUDA API. The function cuMemcpy() may be used to perform a copy between two pointers, ignoring whether they point to host or device memory (making cuMemcpyHtoD(), cuMemcpyDtoD(), and cuMemcpyDtoH() unnecessary for devices supporting unified addressing). For multidimensional copies, the memory type CU_MEMORYTYPE_UNIFIED may be used to specify that the CUDA driver should infer the location of the pointer from its value.

Automatic Mapping of Host Allocated Host Memory

All host memory allocated in all contexts using cuMemAllocHost() and cuMemHostAlloc() is always directly accessible from all contexts on all devices that support unified addressing. This is the case regardless of whether or not the flags CU_MEMHOSTALLOC_PORTABLE and CU_MEMHOSTALLOC_DEVICEMAP are specified.

The pointer value through which allocated host memory may be accessed in kernels on all devices that support unified addressing is the same as the pointer value through which that memory is accessed on the host, so it is not necessary to call cuMemHostGetDevicePointer() to get the device pointer for these allocations.

Note that this is not the case for memory allocated using the flag CU_MEMHOSTALLOC_WRITECOMBINED, as discussed below.

Automatic Registration of Peer Memory

Upon enabling direct access from a context that supports unified addressing to another peer context that supports unified addressing using cuCtxEnablePeerAccess(), all memory allocated in the peer context using cuMemAlloc() and cuMemAllocPitch() will immediately be accessible by the current context. The device pointer value through which any peer memory may be accessed in the current context is the same pointer value through which that memory may be accessed in the peer context.

Exceptions, Disjoint Addressing

Not all memory may be accessed on devices through the same pointer value through which it is accessed on the host. These exceptions are host memory registered using cuMemHostRegister() and host memory allocated using the flag CU_MEMHOSTALLOC_WRITECOMBINED. For these exceptions, there exists a distinct host and device address for the memory. The device address is guaranteed to not overlap any valid host pointer range and is guaranteed to have the same value across all contexts that support unified addressing.

This device address may be queried using cuMemHostGetDevicePointer() when a context using unified addressing is current. Either the host or the unified device pointer value may be used to refer to this memory through cuMemcpy() and similar functions using the CU_MEMORYTYPE_UNIFIED memory type.

cuda.bindings.driver.cuPointerGetAttribute(attribute: CUpointer_attribute, ptr)

Returns information about a pointer.

The supported attributes are:

  • CU_POINTER_ATTRIBUTE_CONTEXT:

  • Returns in *data the CUcontext in which ptr was allocated or registered. The type of data must be CUcontext *.

  • If ptr was not allocated by, mapped by, or registered with a CUcontext which uses unified virtual addressing then CUDA_ERROR_INVALID_VALUE is returned.

  • CU_POINTER_ATTRIBUTE_MEMORY_TYPE:

  • Returns in *data the physical memory type of the memory that ptr addresses as a CUmemorytype enumerated value. The type of data must be unsigned int.

  • If ptr addresses device memory then *data is set to CU_MEMORYTYPE_DEVICE. The particular CUdevice on which the memory resides is the CUdevice of the CUcontext returned by the CU_POINTER_ATTRIBUTE_CONTEXT attribute of ptr.

  • If ptr addresses host memory then *data is set to CU_MEMORYTYPE_HOST.

  • If ptr was not allocated by, mapped by, or registered with a CUcontext which uses unified virtual addressing then CUDA_ERROR_INVALID_VALUE is returned.

  • If the current CUcontext does not support unified virtual addressing then CUDA_ERROR_INVALID_CONTEXT is returned.

  • CU_POINTER_ATTRIBUTE_DEVICE_POINTER:

  • Returns in *data the device pointer value through which ptr may be accessed by kernels running in the current CUcontext. The type of data must be CUdeviceptr *.

  • If there exists no device pointer value through which kernels running in the current CUcontext may access ptr then CUDA_ERROR_INVALID_VALUE is returned.

  • If there is no current CUcontext then CUDA_ERROR_INVALID_CONTEXT is returned.

  • Except in the exceptional disjoint addressing cases discussed below, the value returned in *data will equal the input value ptr.

  • CU_POINTER_ATTRIBUTE_HOST_POINTER:

  • Returns in *data the host pointer value through which ptr may be accessed by the host program. The type of data must be void **. If there exists no host pointer value through which the host program may directly access ptr then CUDA_ERROR_INVALID_VALUE is returned.

  • Except in the exceptional disjoint addressing cases discussed below, the value returned in *data will equal the input value ptr.

  • CU_POINTER_ATTRIBUTE_P2P_TOKENS:

  • Returns in *data two tokens for use with the nv-p2p.h Linux kernel interface. data must be a struct of type CUDA_POINTER_ATTRIBUTE_P2P_TOKENS.

  • ptr must be a pointer to memory obtained from cuMemAlloc(). Note that p2pToken and vaSpaceToken are only valid for the lifetime of the source allocation. A subsequent allocation at the same address may return completely different tokens. Querying this attribute has a side effect of setting the attribute CU_POINTER_ATTRIBUTE_SYNC_MEMOPS for the region of memory that ptr points to.

  • CU_POINTER_ATTRIBUTE_SYNC_MEMOPS:

  • A boolean attribute which when set, ensures that synchronous memory operations initiated on the region of memory that ptr points to will always synchronize. See further documentation in the section titled “API synchronization behavior” to learn more about cases when synchronous memory operations can exhibit asynchronous behavior.

  • CU_POINTER_ATTRIBUTE_BUFFER_ID:

  • Returns in *data a buffer ID which is guaranteed to be unique within the process. data must point to an unsigned long long.

  • ptr must be a pointer to memory obtained from a CUDA memory allocation API. Every memory allocation from any of the CUDA memory allocation APIs will have a unique ID over a process lifetime. Subsequent allocations do not reuse IDs from previous freed allocations. IDs are only unique within a single process.

  • CU_POINTER_ATTRIBUTE_IS_MANAGED:

  • Returns in *data a boolean that indicates whether the pointer points to managed memory or not.

  • If ptr is not a valid CUDA pointer then CUDA_ERROR_INVALID_VALUE is returned.

  • CU_POINTER_ATTRIBUTE_DEVICE_ORDINAL:

  • Returns in *data an integer representing a device ordinal of a device against which the memory was allocated or registered.

  • CU_POINTER_ATTRIBUTE_IS_LEGACY_CUDA_IPC_CAPABLE:

  • Returns in *data a boolean that indicates if this pointer maps to an allocation that is suitable for cudaIpcGetMemHandle.

  • CU_POINTER_ATTRIBUTE_RANGE_START_ADDR:

  • Returns in *data the starting address for the allocation referenced by the device pointer ptr. Note that this is not necessarily the address of the mapped region, but the address of the mappable address range ptr references (e.g. from cuMemAddressReserve).

  • CU_POINTER_ATTRIBUTE_RANGE_SIZE:

  • Returns in *data the size for the allocation referenced by the device pointer ptr. Note that this is not necessarily the size of the mapped region, but the size of the mappable address range ptr references (e.g. from cuMemAddressReserve). To retrieve the size of the mapped region, see cuMemGetAddressRange

  • CU_POINTER_ATTRIBUTE_MAPPED:

  • Returns in *data a boolean that indicates if this pointer is in a valid address range that is mapped to a backing allocation.

  • CU_POINTER_ATTRIBUTE_ALLOWED_HANDLE_TYPES:

  • Returns a bitmask of the allowed handle types for an allocation that may be passed to cuMemExportToShareableHandle.

  • CU_POINTER_ATTRIBUTE_MEMPOOL_HANDLE:

  • Returns in *data the handle to the mempool that the allocation was obtained from.

Note that for most allocations in the unified virtual address space the host and device pointer for accessing the allocation will be the same. The exceptions to this are

  • user memory registered using cuMemHostRegister

  • host memory allocated using cuMemHostAlloc with the CU_MEMHOSTALLOC_WRITECOMBINED flag. For these types of allocation there will exist separate, disjoint host and device addresses for accessing the allocation. In particular:

  • The host address will correspond to an invalid unmapped device address (which will result in an exception if accessed from the device)

  • The device address will correspond to an invalid unmapped host address (which will result in an exception if accessed from the host). For these types of allocations, querying CU_POINTER_ATTRIBUTE_HOST_POINTER and CU_POINTER_ATTRIBUTE_DEVICE_POINTER may be used to retrieve the host and device addresses from either address.

Parameters:
Returns:
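
For example, a minimal sketch querying two attributes of a device pointer dptr (a hypothetical allocation obtained from a CUDA memory allocation API); error checking is omitted and the bindings are assumed to return (CUresult, value) tuples:

    from cuda.bindings import driver

    err, memtype = driver.cuPointerGetAttribute(
        driver.CUpointer_attribute.CU_POINTER_ATTRIBUTE_MEMORY_TYPE, dptr)
    err, ordinal = driver.cuPointerGetAttribute(
        driver.CUpointer_attribute.CU_POINTER_ATTRIBUTE_DEVICE_ORDINAL, dptr)
    # memtype is a CUmemorytype value (e.g. CU_MEMORYTYPE_DEVICE) and ordinal is
    # the device against which the memory was allocated or registered.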

cuda.bindings.driver.cuMemPrefetchAsync(devPtr, size_t count, dstDevice, hStream)

Prefetches memory to the specified destination device.

Note there is a later version of this API, cuMemPrefetchAsync_v2. It will supplant this version in CUDA 13.0; this version is retained for minor version compatibility.

Prefetches memory to the specified destination device. devPtr is the base device pointer of the memory to be prefetched and dstDevice is the destination device. count specifies the number of bytes to copy. hStream is the stream in which the operation is enqueued. The memory range must refer to managed memory allocated via cuMemAllocManaged or declared via managed variables or it may also refer to system-allocated memory on systems with non-zero CU_DEVICE_ATTRIBUTE_PAGEABLE_MEMORY_ACCESS.

Passing in CU_DEVICE_CPU for dstDevice will prefetch the data to host memory. If dstDevice is a GPU, then the device attribute CU_DEVICE_ATTRIBUTE_CONCURRENT_MANAGED_ACCESS must be non-zero. Additionally, hStream must be associated with a device that has a non-zero value for the device attribute CU_DEVICE_ATTRIBUTE_CONCURRENT_MANAGED_ACCESS.

The start address and end address of the memory range will be rounded down and rounded up respectively to be aligned to CPU page size before the prefetch operation is enqueued in the stream.

If no physical memory has been allocated for this region, then this memory region will be populated and mapped on the destination device. If there’s insufficient memory to prefetch the desired region, the Unified Memory driver may evict pages from other cuMemAllocManaged allocations to host memory in order to make room. Device memory allocated using cuMemAlloc or cuArrayCreate will not be evicted.

By default, any mappings to the previous location of the migrated pages are removed and mappings for the new location are only set up on dstDevice. The exact behavior, however, also depends on the settings applied to this memory range via cuMemAdvise as described below:

If CU_MEM_ADVISE_SET_READ_MOSTLY was set on any subset of this memory range, then that subset will create a read-only copy of the pages on dstDevice.

If CU_MEM_ADVISE_SET_PREFERRED_LOCATION was called on any subset of this memory range, then the pages will be migrated to dstDevice even if dstDevice is not the preferred location of any pages in the memory range.

If CU_MEM_ADVISE_SET_ACCESSED_BY was called on any subset of this memory range, then mappings to those pages from all the appropriate processors are updated to refer to the new location if establishing such a mapping is possible. Otherwise, those mappings are cleared.

Note that this API is not required for functionality and only serves to improve performance by allowing the application to migrate data to a suitable location before it is accessed. Memory accesses to this range are always coherent and are allowed even when the data is actively being migrated.

Note that this function is asynchronous with respect to the host and all work on other devices.

Parameters:
  • devPtr (CUdeviceptr) – Pointer to be prefetched

  • count (size_t) – Size in bytes

  • dstDevice (CUdevice) – Destination device to prefetch to

  • hStream (CUstream or cudaStream_t) – Stream to enqueue prefetch operation

Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_INVALID_DEVICE

Return type:

CUresult
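
For example, a minimal sketch that, given a current context, prefetches a managed allocation to device 0 and later back to the host; error checking is omitted, and the bindings are assumed to return (CUresult, ...) tuples and to expose the CU_DEVICE_CPU constant:

    from cuda.bindings import driver

    nbytes = 1 << 20
    err, dptr = driver.cuMemAllocManaged(
        nbytes, driver.CUmemAttach_flags.CU_MEM_ATTACH_GLOBAL.value)
    err, stream = driver.cuStreamCreate(0)
    err, dev = driver.cuDeviceGet(0)

    err, = driver.cuMemPrefetchAsync(dptr, nbytes, dev, stream)                    # to GPU 0
    # ... launch kernels reading dptr on device 0 in the same stream ...
    err, = driver.cuMemPrefetchAsync(dptr, nbytes, driver.CU_DEVICE_CPU, stream)   # to host
    err, = driver.cuStreamSynchronize(stream)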

cuda.bindings.driver.cuMemPrefetchAsync_v2(devPtr, size_t count, CUmemLocation location: CUmemLocation, unsigned int flags, hStream)

Prefetches memory to the specified destination location.

Prefetches memory to the specified destination location. devPtr is the base device pointer of the memory to be prefetched and location specifies the destination location. count specifies the number of bytes to copy. hStream is the stream in which the operation is enqueued. The memory range must refer to managed memory allocated via cuMemAllocManaged or declared via managed variables.

Specifying CU_MEM_LOCATION_TYPE_DEVICE for type will prefetch memory to the GPU specified by device ordinal id, which must have a non-zero value for the device attribute CU_DEVICE_ATTRIBUTE_CONCURRENT_MANAGED_ACCESS. Additionally, hStream must be associated with a device that has a non-zero value for the device attribute CU_DEVICE_ATTRIBUTE_CONCURRENT_MANAGED_ACCESS. Specifying CU_MEM_LOCATION_TYPE_HOST as type will prefetch data to host memory. Applications can request prefetching memory to a specific host NUMA node by specifying CU_MEM_LOCATION_TYPE_HOST_NUMA for type and a valid host NUMA node id in id. Users can also request prefetching memory to the host NUMA node closest to the current thread’s CPU by specifying CU_MEM_LOCATION_TYPE_HOST_NUMA_CURRENT for type. Note that when type is either CU_MEM_LOCATION_TYPE_HOST or CU_MEM_LOCATION_TYPE_HOST_NUMA_CURRENT, id will be ignored.

The start address and end address of the memory range will be rounded down and rounded up respectively to be aligned to CPU page size before the prefetch operation is enqueued in the stream.

If no physical memory has been allocated for this region, then this memory region will be populated and mapped on the destination device. If there’s insufficient memory to prefetch the desired region, the Unified Memory driver may evict pages from other cuMemAllocManaged allocations to host memory in order to make room. Device memory allocated using cuMemAlloc or cuArrayCreate will not be evicted.

By default, any mappings to the previous location of the migrated pages are removed and mappings for the new location are only set up on the destination location. The exact behavior, however, also depends on the settings applied to this memory range via cuMemAdvise as described below:

If CU_MEM_ADVISE_SET_READ_MOSTLY was set on any subset of this memory range, then that subset will create a read-only copy of the pages on destination location. If however the destination location is a host NUMA node, then any pages of that subset that are already in another host NUMA node will be transferred to the destination.

If CU_MEM_ADVISE_SET_PREFERRED_LOCATION was called on any subset of this memory range, then the pages will be migrated to location even if location is not the preferred location of any pages in the memory range.

If CU_MEM_ADVISE_SET_ACCESSED_BY was called on any subset of this memory range, then mappings to those pages from all the appropriate processors are updated to refer to the new location if establishing such a mapping is possible. Otherwise, those mappings are cleared.

Note that this API is not required for functionality and only serves to improve performance by allowing the application to migrate data to a suitable location before it is accessed. Memory accesses to this range are always coherent and are allowed even when the data is actively being migrated.

Note that this function is asynchronous with respect to the host and all work on other devices.

Parameters:
  • devPtr (CUdeviceptr) – Pointer to be prefetched

  • count (size_t) – Size in bytes

  • location (CUmemLocation) – Destination location to prefetch to

  • flags (unsigned int) – Flags for future use, must be zero now.

  • hStream (CUstream or cudaStream_t) – Stream to enqueue prefetch operation

Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_INVALID_DEVICE

Return type:

CUresult
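
For example, a minimal sketch of the _v2 form, prefetching the same managed range (dptr, nbytes and stream as in the previous sketch) to a host NUMA node; node id 0 is illustrative only, error checking is omitted, and the bindings are assumed to return (CUresult, ...) tuples:

    from cuda.bindings import driver

    loc = driver.CUmemLocation()
    loc.type = driver.CUmemLocationType.CU_MEM_LOCATION_TYPE_HOST_NUMA
    loc.id = 0                        # host NUMA node id (illustrative)
    err, = driver.cuMemPrefetchAsync_v2(dptr, nbytes, loc, 0, stream)   # flags must be 0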

cuda.bindings.driver.cuMemAdvise(devPtr, size_t count, advice: CUmem_advise, device)

Advise about the usage of a given memory range.

Note there is a later version of this API, cuMemAdvise_v2. It will supplant this version in CUDA 13.0; this version is retained for minor version compatibility.

Advise the Unified Memory subsystem about the usage pattern for the memory range starting at devPtr with a size of count bytes. The start address and end address of the memory range will be rounded down and rounded up respectively to be aligned to CPU page size before the advice is applied. The memory range must refer to managed memory allocated via cuMemAllocManaged or declared via managed variables. The memory range could also refer to system-allocated pageable memory provided it represents a valid, host-accessible region of memory and all additional constraints imposed by advice as outlined below are also satisfied. Specifying an invalid system-allocated pageable memory range results in an error being returned.

The advice parameter can take the following values:

  • CU_MEM_ADVISE_SET_READ_MOSTLY: This implies that the data is mostly going to be read from and only occasionally written to. Any read accesses from any processor to this region will create a read-only copy of at least the accessed pages in that processor’s memory. Additionally, if cuMemPrefetchAsync is called on this region, it will create a read-only copy of the data on the destination processor. If any processor writes to this region, all copies of the corresponding page will be invalidated except for the one where the write occurred. The device argument is ignored for this advice. Note that for a page to be read-duplicated, the accessing processor must either be the CPU or a GPU that has a non-zero value for the device attribute CU_DEVICE_ATTRIBUTE_CONCURRENT_MANAGED_ACCESS. Also, if a context is created on a device that does not have the device attribute CU_DEVICE_ATTRIBUTE_CONCURRENT_MANAGED_ACCESS set, then read-duplication will not occur until all such contexts are destroyed. If the memory region refers to valid system-allocated pageable memory, then the accessing device must have a non-zero value for the device attribute CU_DEVICE_ATTRIBUTE_PAGEABLE_MEMORY_ACCESS for a read-only copy to be created on that device. Note however that if the accessing device also has a non-zero value for the device attribute CU_DEVICE_ATTRIBUTE_PAGEABLE_MEMORY_ACCESS_USES_HOST_PAGE_TABLES, then setting this advice will not create a read-only copy when that device accesses this memory region.

  • CU_MEM_ADVISE_UNSET_READ_MOSTLY: Undoes the effect of CU_MEM_ADVISE_SET_READ_MOSTLY and also prevents the Unified Memory driver from attempting heuristic read-duplication on the memory range. Any read-duplicated copies of the data will be collapsed into a single copy. The location for the collapsed copy will be the preferred location if the page has a preferred location and one of the read-duplicated copies was resident at that location. Otherwise, the location chosen is arbitrary.

  • CU_MEM_ADVISE_SET_PREFERRED_LOCATION: This advice sets the preferred location for the data to be the memory belonging to device. Passing in CU_DEVICE_CPU for device sets the preferred location as host memory. If device is a GPU, then it must have a non-zero value for the device attribute CU_DEVICE_ATTRIBUTE_CONCURRENT_MANAGED_ACCESS. Setting the preferred location does not cause data to migrate to that location immediately. Instead, it guides the migration policy when a fault occurs on that memory region. If the data is already in its preferred location and the faulting processor can establish a mapping without requiring the data to be migrated, then data migration will be avoided. On the other hand, if the data is not in its preferred location or if a direct mapping cannot be established, then it will be migrated to the processor accessing it. It is important to note that setting the preferred location does not prevent data prefetching done using cuMemPrefetchAsync. Having a preferred location can override the page thrash detection and resolution logic in the Unified Memory driver. Normally, if a page is detected to be constantly thrashing between for example host and device memory, the page may eventually be pinned to host memory by the Unified Memory driver. But if the preferred location is set as device memory, then the page will continue to thrash indefinitely. If CU_MEM_ADVISE_SET_READ_MOSTLY is also set on this memory region or any subset of it, then the policies associated with that advice will override the policies of this advice, unless read accesses from device will not result in a read-only copy being created on that device as outlined in description for the advice CU_MEM_ADVISE_SET_READ_MOSTLY. If the memory region refers to valid system-allocated pageable memory, then device must have a non-zero value for the device attribute CU_DEVICE_ATTRIBUTE_PAGEABLE_MEMORY_ACCESS.

  • CU_MEM_ADVISE_UNSET_PREFERRED_LOCATION: Undoes the effect of CU_MEM_ADVISE_SET_PREFERRED_LOCATION and changes the preferred location to none.

  • CU_MEM_ADVISE_SET_ACCESSED_BY: This advice implies that the data will be accessed by device. Passing in CU_DEVICE_CPU for device will set the advice for the CPU. If device is a GPU, then the device attribute CU_DEVICE_ATTRIBUTE_CONCURRENT_MANAGED_ACCESS must be non-zero. This advice does not cause data migration and has no impact on the location of the data per se. Instead, it causes the data to always be mapped in the specified processor’s page tables, as long as the location of the data permits a mapping to be established. If the data gets migrated for any reason, the mappings are updated accordingly. This advice is recommended in scenarios where data locality is not important, but avoiding faults is. Consider for example a system containing multiple GPUs with peer-to-peer access enabled, where the data located on one GPU is occasionally accessed by peer GPUs. In such scenarios, migrating data over to the other GPUs is not as important because the accesses are infrequent and the overhead of migration may be too high. But preventing faults can still help improve performance, and so having a mapping set up in advance is useful. Note that on CPU access of this data, the data may be migrated to host memory because the CPU typically cannot access device memory directly. Any GPU that had the CU_MEM_ADVISE_SET_ACCESSED_BY flag set for this data will now have its mapping updated to point to the page in host memory. If CU_MEM_ADVISE_SET_READ_MOSTLY is also set on this memory region or any subset of it, then the policies associated with that advice will override the policies of this advice. Additionally, if the preferred location of this memory region or any subset of it is also device, then the policies associated with CU_MEM_ADVISE_SET_PREFERRED_LOCATION will override the policies of this advice. If the memory region refers to valid system-allocated pageable memory, then device must have a non-zero value for the device attribute CU_DEVICE_ATTRIBUTE_PAGEABLE_MEMORY_ACCESS. Additionally, if device has a non-zero value for the device attribute CU_DEVICE_ATTRIBUTE_PAGEABLE_MEMORY_ACCESS_USES_HOST_PAGE_TABLES, then this call has no effect.

  • CU_MEM_ADVISE_UNSET_ACCESSED_BY: Undoes the effect of CU_MEM_ADVISE_SET_ACCESSED_BY. Any mappings to the data from device may be removed at any time causing accesses to result in non-fatal page faults. If the memory region refers to valid system-allocated pageable memory, then device must have a non-zero value for the device attribute CU_DEVICE_ATTRIBUTE_PAGEABLE_MEMORY_ACCESS. Additionally, if device has a non-zero value for the device attribute CU_DEVICE_ATTRIBUTE_PAGEABLE_MEMORY_ACCESS_USES_HOST_PAGE_TABLES, then this call has no effect.

Parameters:
  • devPtr (CUdeviceptr) – Pointer to memory to set the advice for

  • count (size_t) – Size in bytes of the memory range

  • advice (CUmem_advise) – Advice to be applied for the specified memory range

  • device (CUdevice) – Device to apply the advice for

Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_INVALID_DEVICE

Return type:

CUresult
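
For example, a minimal sketch applying two advices to a managed range (dptr, nbytes and dev as in the cuMemPrefetchAsync sketch above); error checking is omitted and the bindings are assumed to return (CUresult,) tuples:

    from cuda.bindings import driver

    err, = driver.cuMemAdvise(
        dptr, nbytes, driver.CUmem_advise.CU_MEM_ADVISE_SET_READ_MOSTLY, dev)
    err, = driver.cuMemAdvise(
        dptr, nbytes, driver.CUmem_advise.CU_MEM_ADVISE_SET_PREFERRED_LOCATION, dev)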

cuda.bindings.driver.cuMemAdvise_v2(devPtr, size_t count, advice: CUmem_advise, CUmemLocation location: CUmemLocation)

Advise about the usage of a given memory range.

Advise the Unified Memory subsystem about the usage pattern for the memory range starting at devPtr with a size of count bytes. The start address and end address of the memory range will be rounded down and rounded up respectively to be aligned to CPU page size before the advice is applied. The memory range must refer to managed memory allocated via cuMemAllocManaged or declared via managed variables. The memory range could also refer to system-allocated pageable memory provided it represents a valid, host-accessible region of memory and all additional constraints imposed by advice as outlined below are also satisfied. Specifying an invalid system-allocated pageable memory range results in an error being returned.

The advice parameter can take the following values:

  • CU_MEM_ADVISE_SET_READ_MOSTLY: This implies that the data is mostly going to be read from and only occasionally written to. Any read accesses from any processor to this region will create a read-only copy of at least the accessed pages in that processor’s memory. Additionally, if cuMemPrefetchAsync or cuMemPrefetchAsync_v2 is called on this region, it will create a read-only copy of the data on the destination processor. If the target location for cuMemPrefetchAsync_v2 is a host NUMA node and a read-only copy already exists on another host NUMA node, that copy will be migrated to the targeted host NUMA node. If any processor writes to this region, all copies of the corresponding page will be invalidated except for the one where the write occurred. If the writing processor is the CPU and the preferred location of the page is a host NUMA node, then the page will also be migrated to that host NUMA node. The location argument is ignored for this advice. Note that for a page to be read-duplicated, the accessing processor must either be the CPU or a GPU that has a non-zero value for the device attribute CU_DEVICE_ATTRIBUTE_CONCURRENT_MANAGED_ACCESS. Also, if a context is created on a device that does not have the device attribute CU_DEVICE_ATTRIBUTE_CONCURRENT_MANAGED_ACCESS set, then read-duplication will not occur until all such contexts are destroyed. If the memory region refers to valid system-allocated pageable memory, then the accessing device must have a non-zero value for the device attribute CU_DEVICE_ATTRIBUTE_PAGEABLE_MEMORY_ACCESS for a read-only copy to be created on that device. Note however that if the accessing device also has a non-zero value for the device attribute CU_DEVICE_ATTRIBUTE_PAGEABLE_MEMORY_ACCESS_USES_HOST_PAGE_TABLES, then setting this advice will not create a read-only copy when that device accesses this memory region.

  • CU_MEM_ADVISE_UNSET_READ_MOSTLY: Undoes the effect of CU_MEM_ADVISE_SET_READ_MOSTLY and also prevents the Unified Memory driver from attempting heuristic read-duplication on the memory range. Any read-duplicated copies of the data will be collapsed into a single copy. The location for the collapsed copy will be the preferred location if the page has a preferred location and one of the read-duplicated copies was resident at that location. Otherwise, the location chosen is arbitrary. Note: The location argument is ignored for this advice.

  • CU_MEM_ADVISE_SET_PREFERRED_LOCATION: This advice sets the preferred location for the data to be the memory belonging to location. When type is CU_MEM_LOCATION_TYPE_HOST, id is ignored and the preferred location is set to be host memory. To set the preferred location to a specific host NUMA node, applications must set type to CU_MEM_LOCATION_TYPE_HOST_NUMA and id must specify the NUMA ID of the host NUMA node. If type is set to CU_MEM_LOCATION_TYPE_HOST_NUMA_CURRENT, id will be ignored and the host NUMA node closest to the calling thread’s CPU will be used as the preferred location. If type is a CU_MEM_LOCATION_TYPE_DEVICE, then id must be a valid device ordinal and the device must have a non-zero value for the device attribute CU_DEVICE_ATTRIBUTE_CONCURRENT_MANAGED_ACCESS. Setting the preferred location does not cause data to migrate to that location immediately. Instead, it guides the migration policy when a fault occurs on that memory region. If the data is already in its preferred location and the faulting processor can establish a mapping without requiring the data to be migrated, then data migration will be avoided. On the other hand, if the data is not in its preferred location or if a direct mapping cannot be established, then it will be migrated to the processor accessing it. It is important to note that setting the preferred location does not prevent data prefetching done using cuMemPrefetchAsync. Having a preferred location can override the page thrash detection and resolution logic in the Unified Memory driver. Normally, if a page is detected to be constantly thrashing between for example host and device memory, the page may eventually be pinned to host memory by the Unified Memory driver. But if the preferred location is set as device memory, then the page will continue to thrash indefinitely. If CU_MEM_ADVISE_SET_READ_MOSTLY is also set on this memory region or any subset of it, then the policies associated with that advice will override the policies of this advice, unless read accesses from location will not result in a read-only copy being created on that processor as outlined in description for the advice CU_MEM_ADVISE_SET_READ_MOSTLY. If the memory region refers to valid system-allocated pageable memory, and type is CU_MEM_LOCATION_TYPE_DEVICE then id must be a valid device that has a non-zero value for the device attribute CU_DEVICE_ATTRIBUTE_PAGEABLE_MEMORY_ACCESS.

  • CU_MEM_ADVISE_UNSET_PREFERRED_LOCATION: Undoes the effect of CU_MEM_ADVISE_SET_PREFERRED_LOCATION and changes the preferred location to none. The location argument is ignored for this advice.

  • CU_MEM_ADVISE_SET_ACCESSED_BY: This advice implies that the data will be accessed by processor location. The type must be either CU_MEM_LOCATION_TYPE_DEVICE with id representing a valid device ordinal or CU_MEM_LOCATION_TYPE_HOST and id will be ignored. All other location types are invalid. If id is a GPU, then the device attribute CU_DEVICE_ATTRIBUTE_CONCURRENT_MANAGED_ACCESS must be non-zero. This advice does not cause data migration and has no impact on the location of the data per se. Instead, it causes the data to always be mapped in the specified processor’s page tables, as long as the location of the data permits a mapping to be established. If the data gets migrated for any reason, the mappings are updated accordingly. This advice is recommended in scenarios where data locality is not important, but avoiding faults is. Consider for example a system containing multiple GPUs with peer-to-peer access enabled, where the data located on one GPU is occasionally accessed by peer GPUs. In such scenarios, migrating data over to the other GPUs is not as important because the accesses are infrequent and the overhead of migration may be too high. But preventing faults can still help improve performance, and so having a mapping set up in advance is useful. Note that on CPU access of this data, the data may be migrated to host memory because the CPU typically cannot access device memory directly. Any GPU that had the CU_MEM_ADVISE_SET_ACCESSED_BY flag set for this data will now have its mapping updated to point to the page in host memory. If CU_MEM_ADVISE_SET_READ_MOSTLY is also set on this memory region or any subset of it, then the policies associated with that advice will override the policies of this advice. Additionally, if the preferred location of this memory region or any subset of it is also location, then the policies associated with CU_MEM_ADVISE_SET_PREFERRED_LOCATION will override the policies of this advice. If the memory region refers to valid system-allocated pageable memory, and type is CU_MEM_LOCATION_TYPE_DEVICE then device in id must have a non-zero value for the device attribute CU_DEVICE_ATTRIBUTE_PAGEABLE_MEMORY_ACCESS. Additionally, if id has a non-zero value for the device attribute CU_DEVICE_ATTRIBUTE_PAGEABLE_MEMORY_ACCESS_USES_HOST_PAGE_TABLES, then this call has no effect.

  • CU_MEM_ADVISE_UNSET_ACCESSED_BY: Undoes the effect of CU_MEM_ADVISE_SET_ACCESSED_BY. Any mappings to the data from location may be removed at any time causing accesses to result in non-fatal page faults. If the memory region refers to valid system-allocated pageable memory, and type is CU_MEM_LOCATION_TYPE_DEVICE then device in id must have a non-zero value for the device attribute CU_DEVICE_ATTRIBUTE_PAGEABLE_MEMORY_ACCESS. Additionally, if id has a non-zero value for the device attribute CU_DEVICE_ATTRIBUTE_PAGEABLE_MEMORY_ACCESS_USES_HOST_PAGE_TABLES, then this call has no effect.

Parameters:
  • devPtr (CUdeviceptr) – Pointer to memory to set the advice for

  • count (size_t) – Size in bytes of the memory range

  • advice (CUmem_advise) – Advice to be applied for the specified memory range

  • location (CUmemLocation) – location to apply the advice for

Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_INVALID_DEVICE

Return type:

CUresult

cuda.bindings.driver.cuMemRangeGetAttribute(size_t dataSize, attribute: CUmem_range_attribute, devPtr, size_t count)

Query an attribute of a given memory range.

Query an attribute about the memory range starting at devPtr with a size of count bytes. The memory range must refer to managed memory allocated via cuMemAllocManaged or declared via managed variables.

The attribute parameter can take the following values:

  • CU_MEM_RANGE_ATTRIBUTE_READ_MOSTLY: If this attribute is specified, data will be interpreted as a 32-bit integer, and dataSize must be 4. The result returned will be 1 if all pages in the given memory range have read-duplication enabled, or 0 otherwise.

  • CU_MEM_RANGE_ATTRIBUTE_PREFERRED_LOCATION: If this attribute is specified, data will be interpreted as a 32-bit integer, and dataSize must be 4. The result returned will be a GPU device id if all pages in the memory range have that GPU as their preferred location, or it will be CU_DEVICE_CPU if all pages in the memory range have the CPU as their preferred location, or it will be CU_DEVICE_INVALID if either all the pages don’t have the same preferred location or some of the pages don’t have a preferred location at all. Note that the actual location of the pages in the memory range at the time of the query may be different from the preferred location.

  • CU_MEM_RANGE_ATTRIBUTE_ACCESSED_BY: If this attribute is specified, data will be interpreted as an array of 32-bit integers, and dataSize must be a non-zero multiple of 4. The result returned will be a list of device ids that had CU_MEM_ADVISE_SET_ACCESSED_BY set for that entire memory range. If any device does not have that advice set for the entire memory range, that device will not be included. If data is larger than the number of devices that have that advice set for that memory range, CU_DEVICE_INVALID will be returned in all the extra space provided. For ex., if dataSize is 12 (i.e. data has 3 elements) and only device 0 has the advice set, then the result returned will be { 0, CU_DEVICE_INVALID, CU_DEVICE_INVALID }. If data is smaller than the number of devices that have that advice set, then only as many devices will be returned as can fit in the array. There is no guarantee on which specific devices will be returned, however.

  • CU_MEM_RANGE_ATTRIBUTE_LAST_PREFETCH_LOCATION: If this attribute is specified, data will be interpreted as a 32-bit integer, and dataSize must be 4. The result returned will be the last location to which all pages in the memory range were prefetched explicitly via cuMemPrefetchAsync. This will either be a GPU id or CU_DEVICE_CPU depending on whether the last location for prefetch was a GPU or the CPU respectively. If any page in the memory range was never explicitly prefetched or if all pages were not prefetched to the same location, CU_DEVICE_INVALID will be returned. Note that this simply returns the last location that the application requested to prefetch the memory range to. It gives no indication as to whether the prefetch operation to that location has completed or even begun.

  • CU_MEM_RANGE_ATTRIBUTE_PREFERRED_LOCATION_TYPE: If this attribute is specified, data will be interpreted as a CUmemLocationType, and dataSize must be sizeof(CUmemLocationType). The CUmemLocationType returned will be CU_MEM_LOCATION_TYPE_DEVICE if all pages in the memory range have the same GPU as their preferred location, or CUmemLocationType will be CU_MEM_LOCATION_TYPE_HOST if all pages in the memory range have the CPU as their preferred location, or it will be CU_MEM_LOCATION_TYPE_HOST_NUMA if all the pages in the memory range have the same host NUMA node ID as their preferred location or it will be CU_MEM_LOCATION_TYPE_INVALID if either all the pages don’t have the same preferred location or some of the pages don’t have a preferred location at all. Note that the actual location type of the pages in the memory range at the time of the query may be different from the preferred location type.

  • CU_MEM_RANGE_ATTRIBUTE_LAST_PREFETCH_LOCATION_TYPE: If this attribute is specified, data will be interpreted as a CUmemLocationType, and dataSize must be sizeof(CUmemLocationType). The result returned will be the last location to which all pages in the memory range were prefetched explicitly via cuMemPrefetchAsync. The CUmemLocationType returned will be CU_MEM_LOCATION_TYPE_DEVICE if the last prefetch location was a GPU or CU_MEM_LOCATION_TYPE_HOST if it was the CPU or CU_MEM_LOCATION_TYPE_HOST_NUMA if the last prefetch location was a specific host NUMA node. If any page in the memory range was never explicitly prefetched or if all pages were not prefetched to the same location, CUmemLocationType will be CU_MEM_LOCATION_TYPE_INVALID. Note that this simply returns the last location type that the application requested to prefetch the memory range to. It gives no indication as to whether the prefetch operation to that location has completed or even begun.

Parameters:
  • dataSize (size_t) – The size of data

  • attribute (CUmem_range_attribute) – The attribute to query

  • devPtr (CUdeviceptr) – Start of the range to query

  • count (size_t) – Size of the range to query

Returns:
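
For example, a minimal sketch querying whether read-duplication is enabled on a managed range (dptr and nbytes as above); per the description, the result is a 32-bit integer, so dataSize is 4. Error checking is omitted and the bindings are assumed to return (CUresult, data) tuples:

    from cuda.bindings import driver

    err, read_mostly = driver.cuMemRangeGetAttribute(
        4, driver.CUmem_range_attribute.CU_MEM_RANGE_ATTRIBUTE_READ_MOSTLY,
        dptr, nbytes)
    # read_mostly is 1 if all pages in the range have read-duplication enabled.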

cuda.bindings.driver.cuMemRangeGetAttributes(dataSizes: Tuple[int] | List[int], attributes: Optional[Tuple[CUmem_range_attribute] | List[CUmem_range_attribute]], size_t numAttributes, devPtr, size_t count)

Query attributes of a given memory range.

Query attributes of the memory range starting at devPtr with a size of count bytes. The memory range must refer to managed memory allocated via cuMemAllocManaged or declared via managed variables. The attributes array will be interpreted to have numAttributes entries. The dataSizes array will also be interpreted to have numAttributes entries. The results of the query will be stored in data.

The list of supported attributes are given below. Please refer to cuMemRangeGetAttribute for attribute descriptions and restrictions.

Parameters:
  • dataSizes (List[int]) – Array containing the sizes of each result

  • attributes (List[CUmem_range_attribute]) – An array of attributes to query (numAttributes and the number of attributes in this array should match)

  • numAttributes (size_t) – Number of attributes to query

  • devPtr (CUdeviceptr) – Start of the range to query

  • count (size_t) – Size of the range to query

Returns:

cuda.bindings.driver.cuPointerSetAttribute(value, attribute: CUpointer_attribute, ptr)

Set attributes on a previously allocated memory region.

The supported attributes are:

  • CU_POINTER_ATTRIBUTE_SYNC_MEMOPS:

  • A boolean attribute that can either be set (1) or unset (0). When set, the region of memory that ptr points to is guaranteed to always synchronize memory operations that are synchronous. If there are some previously initiated synchronous memory operations that are pending when this attribute is set, the function does not return until those memory operations are complete. See further documentation in the section titled “API synchronization behavior” to learn more about cases when synchronous memory operations can exhibit asynchronous behavior. value will be considered as a pointer to an unsigned integer to which this attribute is to be set.

Parameters:
  • value (Any) – Pointer to memory containing the value to be set

  • attribute (CUpointer_attribute) – Pointer attribute to set

  • ptr (CUdeviceptr) – Pointer to a memory region allocated using CUDA memory allocation APIs

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_INVALID_DEVICE

Return type:

CUresult
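
For example, a minimal sketch enabling CU_POINTER_ATTRIBUTE_SYNC_MEMOPS on an allocation dptr; passing 1 sets the attribute and 0 unsets it (error checking omitted; the bindings are assumed to accept a plain integer for value and to return (CUresult,) tuples):

    from cuda.bindings import driver

    err, = driver.cuPointerSetAttribute(
        1, driver.CUpointer_attribute.CU_POINTER_ATTRIBUTE_SYNC_MEMOPS, dptr)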

cuda.bindings.driver.cuPointerGetAttributes(unsigned int numAttributes, attributes: Optional[Tuple[CUpointer_attribute] | List[CUpointer_attribute]], ptr)

Returns information about a pointer.

The supported attributes are (refer to cuPointerGetAttribute for attribute descriptions and restrictions):

Unlike cuPointerGetAttribute, this function will not return an error when the ptr encountered is not a valid CUDA pointer. Instead, the attributes are assigned default NULL values and CUDA_SUCCESS is returned.

If ptr was not allocated by, mapped by, or registered with a CUcontext which uses UVA (Unified Virtual Addressing), CUDA_ERROR_INVALID_CONTEXT is returned.

Parameters:
  • numAttributes (unsigned int) – Number of attributes to query

  • attributes (List[CUpointer_attribute]) – An array of attributes to query (numAttributes and the number of attributes in this array should match)

  • ptr (CUdeviceptr) – Pointer to query

Returns:

Stream Management

This section describes the stream management functions of the low-level CUDA driver application programming interface.

cuda.bindings.driver.cuStreamCreate(unsigned int Flags)

Create a stream.

Creates a stream and returns a handle in phStream. The Flags argument determines behaviors of the stream.

Valid values for Flags are:

  • CU_STREAM_DEFAULT: Default stream creation flag.

  • CU_STREAM_NON_BLOCKING: Specifies that work running in the created stream may run concurrently with work in stream 0 (the NULL stream), and that the created stream should perform no implicit synchronization with stream 0.

Parameters:

Flags (unsigned int) – Parameters for stream creation

Returns:

cuda.bindings.driver.cuStreamCreateWithPriority(unsigned int flags, int priority)

Create a stream with the given priority.

Creates a stream with the specified priority and returns a handle in phStream. This affects the scheduling priority of work in the stream. Priorities provide a hint to preferentially run work with higher priority when possible, but do not preempt already-running work or provide any other functional guarantee on execution order.

priority follows a convention where lower numbers represent higher priorities. ‘0’ represents default priority. The range of meaningful numerical priorities can be queried using cuCtxGetStreamPriorityRange. If the specified priority is outside the numerical range returned by cuCtxGetStreamPriorityRange, it will automatically be clamped to the lowest or the highest number in the range.

Parameters:
  • flags (unsigned int) – Flags for stream creation. See cuStreamCreate for a list of valid flags

  • priority (int) – Stream priority. Lower numbers represent higher priorities. See cuCtxGetStreamPriorityRange for more information about meaningful stream priorities that can be passed.

Returns:

Notes

Stream priorities are supported only on GPUs with compute capability 3.5 or higher.

In the current implementation, only compute kernels launched in priority streams are affected by the stream’s priority. Stream priorities have no effect on host-to-device and device-to-host memory operations.
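
A minimal usage sketch that queries the meaningful priority range and creates a stream at the highest priority, then reads the (possibly clamped) priority back with cuStreamGetPriority; error handling is abbreviated.

    from cuda.bindings import driver

    driver.cuInit(0)
    _, dev = driver.cuDeviceGet(0)
    _, ctx = driver.cuCtxCreate(0, dev)

    # Lower numbers mean higher priority; `greatest` is the highest priority.
    _, least, greatest = driver.cuCtxGetStreamPriorityRange()

    err, stream = driver.cuStreamCreateWithPriority(
        driver.CUstream_flags.CU_STREAM_NON_BLOCKING, greatest)
    assert err == driver.CUresult.CUDA_SUCCESS

    _, prio = driver.cuStreamGetPriority(stream)
    print("stream priority:", prio)

    driver.cuStreamDestroy(stream)
    driver.cuCtxDestroy(ctx)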

cuda.bindings.driver.cuStreamGetPriority(hStream)

Query the priority of a given stream.

Query the priority of a stream created using cuStreamCreate, cuStreamCreateWithPriority or cuGreenCtxStreamCreate and return the priority in priority. Note that if the stream was created with a priority outside the numerical range returned by cuCtxGetStreamPriorityRange, this function returns the clamped priority. See cuStreamCreateWithPriority for details about priority clamping.

Parameters:

hStream (CUstream or cudaStream_t) – Handle to the stream to be queried

Returns:

cuda.bindings.driver.cuStreamGetFlags(hStream)

Query the flags of a given stream.

Query the flags of a stream created using cuStreamCreate, cuStreamCreateWithPriority or cuGreenCtxStreamCreate and return the flags in flags.

Parameters:

hStream (CUstream or cudaStream_t) – Handle to the stream to be queried

Returns:

cuda.bindings.driver.cuStreamGetId(hStream)

Returns the unique Id associated with the stream handle supplied.

Returns in streamId the unique Id which is associated with the given stream handle. The Id is unique for the life of the program.

The stream handle hStream can refer to any of the following:

Parameters:

hStream (CUstream or cudaStream_t) – Handle to the stream to be queried

Returns:

cuda.bindings.driver.cuStreamGetCtx(hStream)

Query the context associated with a stream.

Returns the CUDA context that the stream is associated with.

Note there is a later version of this API, cuStreamGetCtx_v2, which will supplant this version in CUDA 13.0. It is recommended to use cuStreamGetCtx_v2 until then, as this version will return CUDA_ERROR_NOT_SUPPORTED for streams created via the API cuGreenCtxStreamCreate.

The stream handle hStream can refer to any of the following:

Parameters:

hStream (CUstream or cudaStream_t) – Handle to the stream to be queried

Returns:

cuda.bindings.driver.cuStreamGetCtx_v2(hStream)

Query the contexts associated with a stream.

Returns the contexts that the stream is associated with.

If the stream is associated with a green context, the API returns the green context in pGreenCtx and the primary context of the associated device in pCtx.

If the stream is associated with a regular context, the API returns the regular context in pCtx and NULL in pGreenCtx.

The stream handle hStream can refer to any of the following:

Parameters:

hStream (CUstream or cudaStream_t) – Handle to the stream to be queried

Returns:

cuda.bindings.driver.cuStreamWaitEvent(hStream, hEvent, unsigned int Flags)

Make a compute stream wait on an event.

Makes all future work submitted to hStream wait for all work captured in hEvent. See cuEventRecord() for details on what is captured by an event. The synchronization will be performed efficiently on the device when applicable. hEvent may be from a different context or device than hStream.

flags include:

Parameters:
Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_HANDLE

Return type:

CUresult
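
A minimal usage sketch of a cross-stream dependency, where a consumer stream waits on an event recorded in a producer stream; error handling is abbreviated.

    from cuda.bindings import driver

    driver.cuInit(0)
    _, dev = driver.cuDeviceGet(0)
    _, ctx = driver.cuCtxCreate(0, dev)

    _, producer = driver.cuStreamCreate(0)
    _, consumer = driver.cuStreamCreate(0)
    _, done = driver.cuEventCreate(driver.CUevent_flags.CU_EVENT_DISABLE_TIMING)

    # ... enqueue producer work into `producer` here ...

    # Capture the producer's progress and make the consumer wait on it.
    driver.cuEventRecord(done, producer)
    driver.cuStreamWaitEvent(consumer, done, 0)
    # Work enqueued to `consumer` after this point runs behind the recorded work.

    driver.cuStreamSynchronize(consumer)
    driver.cuEventDestroy(done)
    driver.cuStreamDestroy(producer)
    driver.cuStreamDestroy(consumer)
    driver.cuCtxDestroy(ctx)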

cuda.bindings.driver.cuStreamAddCallback(hStream, callback, userData, unsigned int flags)

Add a callback to a compute stream.

Adds a callback to be called on the host after all currently enqueued items in the stream have completed. For each cuStreamAddCallback call, the callback will be executed exactly once. The callback will block later work in the stream until it is finished.

The callback may be passed CUDA_SUCCESS or an error code. In the event of a device error, all subsequently executed callbacks will receive an appropriate CUresult.

Callbacks must not make any CUDA API calls. Attempting to use a CUDA API will result in CUDA_ERROR_NOT_PERMITTED. Callbacks must not perform any synchronization that may depend on outstanding device work or other callbacks that are not mandated to run earlier. Callbacks without a mandated order (in independent streams) execute in undefined order and may be serialized.

For the purposes of Unified Memory, callback execution makes a number of guarantees:

  • The callback stream is considered idle for the duration of the callback. Thus, for example, a callback may always use memory attached to the callback stream.

  • The start of execution of a callback has the same effect as synchronizing an event recorded in the same stream immediately prior to the callback. It thus synchronizes streams which have been “joined” prior to the callback.

  • Adding device work to any stream does not have the effect of making the stream active until all preceding host functions and stream callbacks have executed. Thus, for example, a callback might use global attached memory even if work has been added to another stream, if the work has been ordered behind the callback with an event.

  • Completion of a callback does not cause a stream to become active except as described above. The callback stream will remain idle if no device work follows the callback, and will remain idle across consecutive callbacks without device work in between. Thus, for example, stream synchronization can be done by signaling from a callback at the end of the stream.

Parameters:
  • hStream (CUstream or cudaStream_t) – Stream to add callback to

  • callback (CUstreamCallback) – The function to call once preceding stream operations are complete

  • userData (Any) – User specified data to be passed to the callback function

  • flags (unsigned int) – Reserved for future use, must be 0

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_HANDLE, CUDA_ERROR_NOT_SUPPORTED

Return type:

CUresult

Notes

This function is slated for eventual deprecation and removal. If you do not require the callback to execute in case of a device error, consider using cuLaunchHostFunc. Additionally, this function is not supported with cuStreamBeginCapture and cuStreamEndCapture, unlike cuLaunchHostFunc.

cuda.bindings.driver.cuStreamBeginCapture(hStream, mode: CUstreamCaptureMode)

Begins graph capture on a stream.

Begin graph capture on hStream. When a stream is in capture mode, all operations pushed into the stream will not be executed, but will instead be captured into a graph, which will be returned via cuStreamEndCapture. Capture may not be initiated if stream is CU_STREAM_LEGACY. Capture must be ended on the same stream in which it was initiated, and it may only be initiated if the stream is not already in capture mode. The capture mode may be queried via cuStreamIsCapturing. A unique id representing the capture sequence may be queried via cuStreamGetCaptureInfo.

If mode is not CU_STREAM_CAPTURE_MODE_RELAXED, cuStreamEndCapture must be called on this stream from the same thread.

Parameters:
Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult

Notes

Kernels captured using this API must not use texture and surface references. Reading or writing through any texture or surface reference is undefined behavior. This restriction does not apply to texture and surface objects.
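
A minimal usage sketch of a basic capture sequence using cuStreamBeginCapture together with cuStreamEndCapture (documented below); the memset stands in for arbitrary stream work, and error handling is abbreviated.

    from cuda.bindings import driver

    driver.cuInit(0)
    _, dev = driver.cuDeviceGet(0)
    _, ctx = driver.cuCtxCreate(0, dev)
    _, stream = driver.cuStreamCreate(driver.CUstream_flags.CU_STREAM_NON_BLOCKING)
    _, dptr = driver.cuMemAlloc(1024)

    err, = driver.cuStreamBeginCapture(
        stream, driver.CUstreamCaptureMode.CU_STREAM_CAPTURE_MODE_GLOBAL)
    assert err == driver.CUresult.CUDA_SUCCESS

    # Work pushed here is captured into a graph instead of executing.
    driver.cuMemsetD8Async(dptr, 0, 1024, stream)

    err, graph = driver.cuStreamEndCapture(stream)
    assert err == driver.CUresult.CUDA_SUCCESS

    # The captured graph could now be instantiated and launched;
    # here it is simply destroyed again.
    driver.cuGraphDestroy(graph)
    driver.cuMemFree(dptr)
    driver.cuStreamDestroy(stream)
    driver.cuCtxDestroy(ctx)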

cuda.bindings.driver.cuStreamBeginCaptureToGraph(hStream, hGraph, dependencies: Optional[Tuple[CUgraphNode] | List[CUgraphNode]], dependencyData: Optional[Tuple[CUgraphEdgeData] | List[CUgraphEdgeData]], size_t numDependencies, mode: CUstreamCaptureMode)

Begins graph capture on a stream to an existing graph.

Begin graph capture on hStream, placing new nodes into an existing graph. When a stream is in capture mode, all operations pushed into the stream will not be executed, but will instead be captured into hGraph. The graph will not be instantiable until the user calls cuStreamEndCapture.

Capture may not be initiated if stream is CU_STREAM_LEGACY. Capture must be ended on the same stream in which it was initiated, and it may only be initiated if the stream is not already in capture mode. The capture mode may be queried via cuStreamIsCapturing. A unique id representing the capture sequence may be queried via cuStreamGetCaptureInfo.

If mode is not CU_STREAM_CAPTURE_MODE_RELAXED, cuStreamEndCapture must be called on this stream from the same thread.

Parameters:
  • hStream (CUstream or cudaStream_t) – Stream in which to initiate capture.

  • hGraph (CUgraph or cudaGraph_t) – Graph to capture into.

  • dependencies (List[CUgraphNode]) – Dependencies of the first node captured in the stream. Can be NULL if numDependencies is 0.

  • dependencyData (List[CUgraphEdgeData]) – Optional array of data associated with each dependency.

  • numDependencies (size_t) – Number of dependencies.

  • mode (CUstreamCaptureMode) – Controls the interaction of this capture sequence with other API calls that are potentially unsafe. For more details see cuThreadExchangeStreamCaptureMode.

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult

Notes

Kernels captured using this API must not use texture and surface references. Reading or writing through any texture or surface reference is undefined behavior. This restriction does not apply to texture and surface objects.

cuda.bindings.driver.cuThreadExchangeStreamCaptureMode(mode: CUstreamCaptureMode)

Swaps the stream capture interaction mode for a thread.

Sets the calling thread’s stream capture interaction mode to the value contained in *mode, and overwrites *mode with the previous mode for the thread. To facilitate deterministic behavior across function or module boundaries, callers are encouraged to use this API in a push-pop fashion:

View CUDA Toolkit Documentation for a C++ code example
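
The C++ example is not reproduced here; the following is a minimal Python sketch of the same push-pop pattern, assuming cuInit has already been called.

    from cuda.bindings import driver

    # "Push": switch this thread to relaxed mode, remembering the previous mode.
    err, previous = driver.cuThreadExchangeStreamCaptureMode(
        driver.CUstreamCaptureMode.CU_STREAM_CAPTURE_MODE_RELAXED)
    assert err == driver.CUresult.CUDA_SUCCESS

    try:
        # ... calls that are potentially unsafe during capture go here ...
        pass
    finally:
        # "Pop": restore whatever mode the thread had before.
        driver.cuThreadExchangeStreamCaptureMode(previous)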

During stream capture (see cuStreamBeginCapture), some actions, such as a call to cudaMalloc, may be unsafe. In the case of cudaMalloc, the operation is not enqueued asynchronously to a stream, and is not observed by stream capture. Therefore, if the sequence of operations captured via cuStreamBeginCapture depended on the allocation being replayed whenever the graph is launched, the captured graph would be invalid.

Therefore, stream capture places restrictions on API calls that can be made within or concurrently to a cuStreamBeginCapture-cuStreamEndCapture sequence. This behavior can be controlled via this API and flags to cuStreamBeginCapture.

A thread’s mode is one of the following:

  • CU_STREAM_CAPTURE_MODE_GLOBAL: This is the default mode. If the local thread has an ongoing capture sequence that was not initiated with CU_STREAM_CAPTURE_MODE_RELAXED at cuStreamBeginCapture, or if any other thread has a concurrent capture sequence initiated with CU_STREAM_CAPTURE_MODE_GLOBAL, this thread is prohibited from potentially unsafe API calls.

  • CU_STREAM_CAPTURE_MODE_THREAD_LOCAL: If the local thread has an ongoing capture sequence not initiated with CU_STREAM_CAPTURE_MODE_RELAXED, it is prohibited from potentially unsafe API calls. Concurrent capture sequences in other threads are ignored.

  • CU_STREAM_CAPTURE_MODE_RELAXED: The local thread is not prohibited from potentially unsafe API calls. Note that the thread is still prohibited from API calls which necessarily conflict with stream capture, for example, attempting cuEventQuery on an event that was last recorded inside a capture sequence.

Parameters:

mode (CUstreamCaptureMode) – Pointer to mode value to swap with the current mode

Returns:

cuda.bindings.driver.cuStreamEndCapture(hStream)

Ends capture on a stream, returning the captured graph.

End capture on hStream, returning the captured graph via phGraph. Capture must have been initiated on hStream via a call to cuStreamBeginCapture. If capture was invalidated, due to a violation of the rules of stream capture, then a NULL graph will be returned.

If the mode argument to cuStreamBeginCapture was not CU_STREAM_CAPTURE_MODE_RELAXED, this call must be from the same thread as cuStreamBeginCapture.

Parameters:

hStream (CUstream or cudaStream_t) – Stream to query

Returns:

cuda.bindings.driver.cuStreamIsCapturing(hStream)

Returns a stream’s capture status.

Return the capture status of hStream via captureStatus. After a successful call, *captureStatus will contain one of the following:

  • CU_STREAM_CAPTURE_STATUS_NONE: the stream is not capturing.

  • CU_STREAM_CAPTURE_STATUS_ACTIVE: the stream is actively capturing.

  • CU_STREAM_CAPTURE_STATUS_INVALIDATED: the stream was capturing but an error has invalidated the capture sequence. The capture sequence must be terminated with cuStreamEndCapture on the stream where it was initiated in order to continue using hStream.

Note that, if this is called on CU_STREAM_LEGACY (the “null stream”) while a blocking stream in the same context is capturing, it will return CUDA_ERROR_STREAM_CAPTURE_IMPLICIT and *captureStatus is unspecified after the call. The blocking stream capture is not invalidated.

When a blocking stream is capturing, the legacy stream is in an unusable state until the blocking stream capture is terminated. The legacy stream is not supported for stream capture, but attempted use would have an implicit dependency on the capturing stream(s).

Parameters:

hStream (CUstream or cudaStream_t) – Stream to query

Returns:

cuda.bindings.driver.cuStreamGetCaptureInfo(hStream)

Query a stream’s capture state.

Query stream state related to stream capture.

If called on CU_STREAM_LEGACY (the “null stream”) while a stream not created with CU_STREAM_NON_BLOCKING is capturing, returns CUDA_ERROR_STREAM_CAPTURE_IMPLICIT.

Valid data (other than capture status) is returned only if both of the following are true:

Parameters:

hStream (CUstream or cudaStream_t) – The stream to query

Returns:

  • CUresult – CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_STREAM_CAPTURE_IMPLICIT

  • captureStatus_out (CUstreamCaptureStatus) – Location to return the capture status of the stream; required

  • id_out (cuuint64_t) – Optional location to return an id for the capture sequence, which is unique over the lifetime of the process

  • graph_out (CUgraph) – Optional location to return the graph being captured into. All operations other than destroy and node removal are permitted on the graph while the capture sequence is in progress. This API does not transfer ownership of the graph, which is transferred or destroyed at cuStreamEndCapture. Note that the graph handle may be invalidated before end of capture for certain errors. Nodes that are or become unreachable from the original stream at cuStreamEndCapture due to direct actions on the graph do not trigger CUDA_ERROR_STREAM_CAPTURE_UNJOINED.

  • dependencies_out (List[CUgraphNode]) – Optional location to store a pointer to an array of nodes. The next node to be captured in the stream will depend on this set of nodes, absent operations such as event wait which modify this set. The array pointer is valid until the next API call which operates on the stream or until the capture is terminated. The node handles may be copied out and are valid until they or the graph is destroyed. The driver-owned array may also be passed directly to APIs that operate on the graph (not the stream) without copying.

  • numDependencies_out (int) – Optional location to store the size of the array returned in dependencies_out.
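
A minimal usage sketch that inspects an in-progress capture; the return ordering mirrors the output list above, and error handling is abbreviated.

    from cuda.bindings import driver

    driver.cuInit(0)
    _, dev = driver.cuDeviceGet(0)
    _, ctx = driver.cuCtxCreate(0, dev)
    _, stream = driver.cuStreamCreate(driver.CUstream_flags.CU_STREAM_NON_BLOCKING)

    driver.cuStreamBeginCapture(
        stream, driver.CUstreamCaptureMode.CU_STREAM_CAPTURE_MODE_GLOBAL)

    (err, status, seq_id, graph,
     deps, num_deps) = driver.cuStreamGetCaptureInfo(stream)
    if err == driver.CUresult.CUDA_SUCCESS:
        print("status:", status, "sequence id:", seq_id,
              "dependencies:", num_deps)

    _, captured = driver.cuStreamEndCapture(stream)
    driver.cuGraphDestroy(captured)
    driver.cuStreamDestroy(stream)
    driver.cuCtxDestroy(ctx)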

cuda.bindings.driver.cuStreamGetCaptureInfo_v3(hStream)

Query a stream’s capture state (12.3+)

Query stream state related to stream capture.

If called on CU_STREAM_LEGACY (the “null stream”) while a stream not created with CU_STREAM_NON_BLOCKING is capturing, returns CUDA_ERROR_STREAM_CAPTURE_IMPLICIT.

Valid data (other than capture status) is returned only if both of the following are true:

If edgeData_out is non-NULL then dependencies_out must be as well. If dependencies_out is non-NULL and edgeData_out is NULL, but there is non-zero edge data for one or more of the current stream dependencies, the call will return CUDA_ERROR_LOSSY_QUERY.

Parameters:

hStream (CUstream or cudaStream_t) – The stream to query

Returns:

  • CUresult – CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_STREAM_CAPTURE_IMPLICIT, CUDA_ERROR_LOSSY_QUERY

  • captureStatus_out (CUstreamCaptureStatus) – Location to return the capture status of the stream; required

  • id_out (cuuint64_t) – Optional location to return an id for the capture sequence, which is unique over the lifetime of the process

  • graph_out (CUgraph) – Optional location to return the graph being captured into. All operations other than destroy and node removal are permitted on the graph while the capture sequence is in progress. This API does not transfer ownership of the graph, which is transferred or destroyed at cuStreamEndCapture. Note that the graph handle may be invalidated before end of capture for certain errors. Nodes that are or become unreachable from the original stream at cuStreamEndCapture due to direct actions on the graph do not trigger CUDA_ERROR_STREAM_CAPTURE_UNJOINED.

  • dependencies_out (List[CUgraphNode]) – Optional location to store a pointer to an array of nodes. The next node to be captured in the stream will depend on this set of nodes, absent operations such as event wait which modify this set. The array pointer is valid until the next API call which operates on the stream or until the capture is terminated. The node handles may be copied out and are valid until they or the graph is destroyed. The driver-owned array may also be passed directly to APIs that operate on the graph (not the stream) without copying.

  • edgeData_out (List[CUgraphEdgeData]) – Optional location to store a pointer to an array of graph edge data. This array parallels dependencies_out; the next node to be added has an edge to dependencies_out[i] with annotation edgeData_out[i] for each i. The array pointer is valid until the next API call which operates on the stream or until the capture is terminated.

  • numDependencies_out (int) – Optional location to store the size of the array returned in dependencies_out.

cuda.bindings.driver.cuStreamUpdateCaptureDependencies(hStream, dependencies: Optional[Tuple[CUgraphNode] | List[CUgraphNode]], size_t numDependencies, unsigned int flags)

Update the set of dependencies in a capturing stream (11.3+)

Modifies the dependency set of a capturing stream. The dependency set is the set of nodes that the next captured node in the stream will depend on.

Valid flags are CU_STREAM_ADD_CAPTURE_DEPENDENCIES and CU_STREAM_SET_CAPTURE_DEPENDENCIES. These control whether the set passed to the API is added to the existing set or replaces it. A flags value of 0 defaults to CU_STREAM_ADD_CAPTURE_DEPENDENCIES.

Nodes that are removed from the dependency set via this API do not result in CUDA_ERROR_STREAM_CAPTURE_UNJOINED if they are unreachable from the stream at cuStreamEndCapture.

Returns CUDA_ERROR_ILLEGAL_STATE if the stream is not capturing.

This API is new in CUDA 11.3. Developers requiring compatibility across minor versions to CUDA 11.0 should not use this API, or should provide a fallback.

Parameters:
  • hStream (CUstream or cudaStream_t) – The stream to update

  • dependencies (List[CUgraphNode]) – The set of dependencies to add

  • numDependencies (size_t) – The size of the dependencies array

  • flags (unsigned int) – See above

Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_ILLEGAL_STATE

Return type:

CUresult

cuda.bindings.driver.cuStreamUpdateCaptureDependencies_v2(hStream, dependencies: Optional[Tuple[CUgraphNode] | List[CUgraphNode]], dependencyData: Optional[Tuple[CUgraphEdgeData] | List[CUgraphEdgeData]], size_t numDependencies, unsigned int flags)

Update the set of dependencies in a capturing stream (12.3+)

Modifies the dependency set of a capturing stream. The dependency set is the set of nodes that the next captured node in the stream will depend on along with the edge data for those dependencies.

Valid flags are CU_STREAM_ADD_CAPTURE_DEPENDENCIES and CU_STREAM_SET_CAPTURE_DEPENDENCIES. These control whether the set passed to the API is added to the existing set or replaces it. A flags value of 0 defaults to CU_STREAM_ADD_CAPTURE_DEPENDENCIES.

Nodes that are removed from the dependency set via this API do not result in CUDA_ERROR_STREAM_CAPTURE_UNJOINED if they are unreachable from the stream at cuStreamEndCapture.

Returns CUDA_ERROR_ILLEGAL_STATE if the stream is not capturing.

Parameters:
  • hStream (CUstream or cudaStream_t) – The stream to update

  • dependencies (List[CUgraphNode]) – The set of dependencies to add

  • dependencyData (List[CUgraphEdgeData]) – Optional array of data associated with each dependency.

  • numDependencies (size_t) – The size of the dependencies array

  • flags (unsigned int) – See above

Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_ILLEGAL_STATE

Return type:

CUresult

cuda.bindings.driver.cuStreamAttachMemAsync(hStream, dptr, size_t length, unsigned int flags)

Attach memory to a stream asynchronously.

Enqueues an operation in hStream to specify stream association of length bytes of memory starting from dptr. This function is a stream-ordered operation, meaning that it is dependent on, and will only take effect when, previous work in stream has completed. Any previous association is automatically replaced.

dptr must point to one of the following types of memories:

  • managed memory declared using the managed keyword or allocated with cuMemAllocManaged.

  • a valid host-accessible region of system-allocated pageable memory. This type of memory may only be specified if the device associated with the stream reports a non-zero value for the device attribute CU_DEVICE_ATTRIBUTE_PAGEABLE_MEMORY_ACCESS.

For managed allocations, length must be either zero or the entire allocation’s size. Both indicate that the entire allocation’s stream association is being changed. Currently, it is not possible to change stream association for a portion of a managed allocation.

For pageable host allocations, length must be non-zero.

The stream association is specified using flags which must be one of CUmemAttach_flags. If the CU_MEM_ATTACH_GLOBAL flag is specified, the memory can be accessed by any stream on any device. If the CU_MEM_ATTACH_HOST flag is specified, the program makes a guarantee that it won’t access the memory on the device from any stream on a device that has a zero value for the device attribute CU_DEVICE_ATTRIBUTE_CONCURRENT_MANAGED_ACCESS. If the CU_MEM_ATTACH_SINGLE flag is specified and hStream is associated with a device that has a zero value for the device attribute CU_DEVICE_ATTRIBUTE_CONCURRENT_MANAGED_ACCESS, the program makes a guarantee that it will only access the memory on the device from hStream. It is illegal to attach singly to the NULL stream, because the NULL stream is a virtual global stream and not a specific stream. An error will be returned in this case.

When memory is associated with a single stream, the Unified Memory system will allow CPU access to this memory region so long as all operations in hStream have completed, regardless of whether other streams are active. In effect, this constrains exclusive ownership of the managed memory region by an active GPU to per-stream activity instead of whole-GPU activity.

Accessing memory on the device from streams that are not associated with it will produce undefined results. No error checking is performed by the Unified Memory system to ensure that kernels launched into other streams do not access this region.

It is a program’s responsibility to order calls to cuStreamAttachMemAsync via events, synchronization or other means to ensure legal access to memory at all times. Data visibility and coherency will be changed appropriately for all kernels which follow a stream-association change.

If hStream is destroyed while data is associated with it, the association is removed and the association reverts to the default visibility of the allocation as specified at cuMemAllocManaged. For managed variables, the default association is always CU_MEM_ATTACH_GLOBAL. Note that destroying a stream is an asynchronous operation, and as a result, the change to default association won’t happen until all work in the stream has completed.

Parameters:
  • hStream (CUstream or cudaStream_t) – Stream in which to enqueue the attach operation

  • dptr (CUdeviceptr) – Pointer to memory (must be a pointer to managed memory or to a valid host-accessible region of system-allocated pageable memory)

  • length (size_t) – Length of memory

  • flags (unsigned int) – Must be one of CUmemAttach_flags

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_HANDLE, CUDA_ERROR_NOT_SUPPORTED

Return type:

CUresult
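
A minimal usage sketch that narrows a managed allocation's association to a single stream; whether this is beneficial depends on the device's CU_DEVICE_ATTRIBUTE_CONCURRENT_MANAGED_ACCESS support, and error handling is abbreviated.

    from cuda.bindings import driver

    driver.cuInit(0)
    _, dev = driver.cuDeviceGet(0)
    _, ctx = driver.cuCtxCreate(0, dev)
    _, stream = driver.cuStreamCreate(driver.CUstream_flags.CU_STREAM_NON_BLOCKING)

    # Managed allocation, initially visible to all streams.
    _, dptr = driver.cuMemAllocManaged(
        1 << 20, driver.CUmemAttach_flags.CU_MEM_ATTACH_GLOBAL)

    # Length 0 means the whole allocation.
    err, = driver.cuStreamAttachMemAsync(
        stream, dptr, 0, driver.CUmemAttach_flags.CU_MEM_ATTACH_SINGLE)
    assert err == driver.CUresult.CUDA_SUCCESS

    driver.cuStreamSynchronize(stream)
    driver.cuMemFree(dptr)
    driver.cuStreamDestroy(stream)
    driver.cuCtxDestroy(ctx)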

cuda.bindings.driver.cuStreamQuery(hStream)

Determine status of a compute stream.

Returns CUDA_SUCCESS if all operations in the stream specified by hStream have completed, or CUDA_ERROR_NOT_READY if not.

For the purposes of Unified Memory, a return value of CUDA_SUCCESS is equivalent to having called cuStreamSynchronize().

Parameters:

hStream (CUstream or cudaStream_t) – Stream to query status of

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_HANDLE, CUDA_ERROR_NOT_READY

Return type:

CUresult
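
A minimal usage sketch that polls a stream and falls back to a blocking wait; error handling is abbreviated.

    from cuda.bindings import driver

    driver.cuInit(0)
    _, dev = driver.cuDeviceGet(0)
    _, ctx = driver.cuCtxCreate(0, dev)
    _, stream = driver.cuStreamCreate(0)
    _, dptr = driver.cuMemAlloc(1 << 20)

    driver.cuMemsetD8Async(dptr, 0xFF, 1 << 20, stream)

    # Non-blocking status check; block only if work is still pending.
    err, = driver.cuStreamQuery(stream)
    if err == driver.CUresult.CUDA_ERROR_NOT_READY:
        err, = driver.cuStreamSynchronize(stream)
    assert err == driver.CUresult.CUDA_SUCCESS

    driver.cuMemFree(dptr)
    driver.cuStreamDestroy(stream)
    driver.cuCtxDestroy(ctx)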

cuda.bindings.driver.cuStreamSynchronize(hStream)

Wait until a stream’s tasks are completed.

Waits until the device has completed all operations in the stream specified by hStream. If the context was created with the CU_CTX_SCHED_BLOCKING_SYNC flag, the CPU thread will block until the stream is finished with all of its tasks.

Parameters:

hStream (CUstream or cudaStream_t) – Stream to wait for

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_HANDLE

Return type:

CUresult

cuda.bindings.driver.cuStreamDestroy(hStream)

Destroys a stream.

Destroys the stream specified by hStream.

In case the device is still doing work in the stream hStream when cuStreamDestroy() is called, the function will return immediately and the resources associated with hStream will be released automatically once the device has completed all work in hStream.

Parameters:

hStream (CUstream or cudaStream_t) – Stream to destroy

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_INVALID_HANDLE

Return type:

CUresult

cuda.bindings.driver.cuStreamCopyAttributes(dst, src)

Copies attributes from source stream to destination stream.

Copies attributes from source stream src to destination stream dst. Both streams must have the same context.

Parameters:
Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult

cuda.bindings.driver.cuStreamGetAttribute(hStream, attr: CUstreamAttrID)

Queries stream attribute.

Queries attribute attr from hStream and stores it in corresponding member of value_out.

Parameters:
Returns:

cuda.bindings.driver.cuStreamSetAttribute(hStream, attr: CUstreamAttrID, CUstreamAttrValue value: Optional[CUstreamAttrValue])

Sets stream attribute.

Sets attribute attr on hStream from corresponding attribute of value. The updated attribute will be applied to subsequent work submitted to the stream. It will not affect previously submitted work.

Parameters:
Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_INVALID_HANDLE

Return type:

CUresult
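
A minimal usage sketch of setting and reading back the synchronization-policy attribute. It assumes that CUstreamAttrValue exposes the union members (here syncPolicy) as settable Python attributes, mirroring the C union; error handling is abbreviated.

    from cuda.bindings import driver

    driver.cuInit(0)
    _, dev = driver.cuDeviceGet(0)
    _, ctx = driver.cuCtxCreate(0, dev)
    _, stream = driver.cuStreamCreate(0)

    attr = driver.CUstreamAttrID.CU_STREAM_ATTRIBUTE_SYNCHRONIZATION_POLICY
    value = driver.CUstreamAttrValue()
    value.syncPolicy = driver.CUsynchronizationPolicy.CU_SYNC_POLICY_YIELD

    err, = driver.cuStreamSetAttribute(stream, attr, value)
    if err == driver.CUresult.CUDA_SUCCESS:
        _, current = driver.cuStreamGetAttribute(stream, attr)
        print("sync policy:", current.syncPolicy)

    driver.cuStreamDestroy(stream)
    driver.cuCtxDestroy(ctx)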

Event Management

This section describes the event management functions of the low-level CUDA driver application programming interface.

cuda.bindings.driver.cuEventCreate(unsigned int Flags)

Creates an event.

Creates an event *phEvent for the current context with the flags specified via Flags. Valid flags include:

  • CU_EVENT_DEFAULT: Default event creation flag.

  • CU_EVENT_BLOCKING_SYNC: Specifies that the created event should use blocking synchronization. A CPU thread that uses cuEventSynchronize() to wait on an event created with this flag will block until the event has actually been recorded.

  • CU_EVENT_DISABLE_TIMING: Specifies that the created event does not need to record timing data. Events created with this flag specified and CU_EVENT_BLOCKING_SYNC not specified will provide the best performance when used with cuStreamWaitEvent() and cuEventQuery().

  • CU_EVENT_INTERPROCESS: Specifies that the created event may be used as an interprocess event by cuIpcGetEventHandle(). CU_EVENT_INTERPROCESS must be specified along with CU_EVENT_DISABLE_TIMING.

Parameters:

Flags (unsigned int) – Event creation flags

Returns:

cuda.bindings.driver.cuEventRecord(hEvent, hStream)

Records an event.

Captures in hEvent the contents of hStream at the time of this call. hEvent and hStream must be from the same context otherwise CUDA_ERROR_INVALID_HANDLE is returned. Calls such as cuEventQuery() or cuStreamWaitEvent() will then examine or wait for completion of the work that was captured. Uses of hStream after this call do not modify hEvent. See note on default stream behavior for what is captured in the default case.

cuEventRecord() can be called multiple times on the same event and will overwrite the previously captured state. Other APIs such as cuStreamWaitEvent() use the most recently captured state at the time of the API call, and are not affected by later calls to cuEventRecord(). Before the first call to cuEventRecord(), an event represents an empty set of work, so for example cuEventQuery() would return CUDA_SUCCESS.

Parameters:
Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_HANDLE, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult

cuda.bindings.driver.cuEventRecordWithFlags(hEvent, hStream, unsigned int flags)

Records an event.

Captures in hEvent the contents of hStream at the time of this call. hEvent and hStream must be from the same context otherwise CUDA_ERROR_INVALID_HANDLE is returned. Calls such as cuEventQuery() or cuStreamWaitEvent() will then examine or wait for completion of the work that was captured. Uses of hStream after this call do not modify hEvent. See note on default stream behavior for what is captured in the default case.

cuEventRecordWithFlags() can be called multiple times on the same event and will overwrite the previously captured state. Other APIs such as cuStreamWaitEvent() use the most recently captured state at the time of the API call, and are not affected by later calls to cuEventRecordWithFlags(). Before the first call to cuEventRecordWithFlags(), an event represents an empty set of work, so for example cuEventQuery() would return CUDA_SUCCESS.

flags include:

Parameters:
Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_HANDLE, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult

cuda.bindings.driver.cuEventQuery(hEvent)

Queries an event’s status.

Queries the status of all work currently captured by hEvent. See cuEventRecord() for details on what is captured by an event.

Returns CUDA_SUCCESS if all captured work has been completed, or CUDA_ERROR_NOT_READY if any captured work is incomplete.

For the purposes of Unified Memory, a return value of CUDA_SUCCESS is equivalent to having called cuEventSynchronize().

Parameters:

hEvent (CUevent or cudaEvent_t) – Event to query

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_HANDLE, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_NOT_READY

Return type:

CUresult

cuda.bindings.driver.cuEventSynchronize(hEvent)

Waits for an event to complete.

Waits until the completion of all work currently captured in hEvent. See cuEventRecord() for details on what is captured by an event.

Waiting for an event that was created with the CU_EVENT_BLOCKING_SYNC flag will cause the calling CPU thread to block until the event has been completed by the device. If the CU_EVENT_BLOCKING_SYNC flag has not been set, then the CPU thread will busy-wait until the event has been completed by the device.

Parameters:

hEvent (CUevent or cudaEvent_t) – Event to wait for

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_HANDLE

Return type:

CUresult

cuda.bindings.driver.cuEventDestroy(hEvent)

Destroys an event.

Destroys the event specified by hEvent.

An event may be destroyed before it is complete (i.e., while cuEventQuery() would return CUDA_ERROR_NOT_READY). In this case, the call does not block on completion of the event, and any associated resources will automatically be released asynchronously at completion.

Parameters:

hEvent (CUevent or cudaEvent_t) – Event to destroy

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_HANDLE

Return type:

CUresult

cuda.bindings.driver.cuEventElapsedTime(hStart, hEnd)

Computes the elapsed time between two events.

Computes the elapsed time between two events (in milliseconds with a resolution of around 0.5 microseconds).

If either event was last recorded in a non-NULL stream, the resulting time may be greater than expected (even if both used the same stream handle). This happens because the cuEventRecord() operation takes place asynchronously and there is no guarantee that the measured latency is actually just between the two events. Any number of other different stream operations could execute in between the two measured events, thus altering the timing in a significant way.

If cuEventRecord() has not been called on either event then CUDA_ERROR_INVALID_HANDLE is returned. If cuEventRecord() has been called on both events but one or both of them has not yet been completed (that is, cuEventQuery() would return CUDA_ERROR_NOT_READY on at least one of the events), CUDA_ERROR_NOT_READY is returned. If either event was created with the CU_EVENT_DISABLE_TIMING flag, then this function will return CUDA_ERROR_INVALID_HANDLE.

Parameters:
Returns:
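
A minimal usage sketch that times an asynchronous memset with a pair of events; error handling is abbreviated.

    from cuda.bindings import driver

    driver.cuInit(0)
    _, dev = driver.cuDeviceGet(0)
    _, ctx = driver.cuCtxCreate(0, dev)
    _, stream = driver.cuStreamCreate(0)
    _, dptr = driver.cuMemAlloc(1 << 24)

    _, start = driver.cuEventCreate(driver.CUevent_flags.CU_EVENT_DEFAULT)
    _, stop = driver.cuEventCreate(driver.CUevent_flags.CU_EVENT_DEFAULT)

    driver.cuEventRecord(start, stream)
    driver.cuMemsetD8Async(dptr, 0, 1 << 24, stream)   # the timed work
    driver.cuEventRecord(stop, stream)

    driver.cuEventSynchronize(stop)
    err, ms = driver.cuEventElapsedTime(start, stop)
    if err == driver.CUresult.CUDA_SUCCESS:
        print(f"memset took {ms:.3f} ms")

    driver.cuEventDestroy(start)
    driver.cuEventDestroy(stop)
    driver.cuMemFree(dptr)
    driver.cuStreamDestroy(stream)
    driver.cuCtxDestroy(ctx)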

External Resource Interoperability

This section describes the external resource interoperability functions of the low-level CUDA driver application programming interface.

cuda.bindings.driver.cuImportExternalMemory(CUDA_EXTERNAL_MEMORY_HANDLE_DESC memHandleDesc: Optional[CUDA_EXTERNAL_MEMORY_HANDLE_DESC])

Imports an external memory object.

Imports an externally allocated memory object and returns a handle to that in extMem_out.

The properties of the handle being imported must be described in memHandleDesc. The CUDA_EXTERNAL_MEMORY_HANDLE_DESC structure is defined as follows:

View CUDA Toolkit Documentation for a C++ code example

where type specifies the type of handle being imported. CUexternalMemoryHandleType is defined as:

View CUDA Toolkit Documentation for a C++ code example

If type is CU_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_FD, then CUDA_EXTERNAL_MEMORY_HANDLE_DESC::handle::fd must be a valid file descriptor referencing a memory object. Ownership of the file descriptor is transferred to the CUDA driver when the handle is imported successfully. Performing any operations on the file descriptor after it is imported results in undefined behavior.

If type is CU_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_WIN32, then exactly one of CUDA_EXTERNAL_MEMORY_HANDLE_DESC::handle::win32::handle and CUDA_EXTERNAL_MEMORY_HANDLE_DESC::handle::win32::name must not be NULL. If CUDA_EXTERNAL_MEMORY_HANDLE_DESC::handle::win32::handle is not NULL, then it must represent a valid shared NT handle that references a memory object. Ownership of this handle is not transferred to CUDA after the import operation, so the application must release the handle using the appropriate system call. If CUDA_EXTERNAL_MEMORY_HANDLE_DESC::handle::win32::name is not NULL, then it must point to a NULL-terminated array of UTF-16 characters that refers to a memory object.

If type is CU_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_WIN32_KMT, then CUDA_EXTERNAL_MEMORY_HANDLE_DESC::handle::win32::handle must be non-NULL and CUDA_EXTERNAL_MEMORY_HANDLE_DESC::handle::win32::name must be NULL. The handle specified must be a globally shared KMT handle. This handle does not hold a reference to the underlying object, and thus will be invalid when all references to the memory object are destroyed.

If type is CU_EXTERNAL_MEMORY_HANDLE_TYPE_D3D12_HEAP, then exactly one of CUDA_EXTERNAL_MEMORY_HANDLE_DESC::handle::win32::handle and CUDA_EXTERNAL_MEMORY_HANDLE_DESC::handle::win32::name must not be NULL. If CUDA_EXTERNAL_MEMORY_HANDLE_DESC::handle::win32::handle is not NULL, then it must represent a valid shared NT handle that is returned by ID3D12Device::CreateSharedHandle when referring to a ID3D12Heap object. This handle holds a reference to the underlying object. If CUDA_EXTERNAL_MEMORY_HANDLE_DESC::handle::win32::name is not NULL, then it must point to a NULL-terminated array of UTF-16 characters that refers to a ID3D12Heap object.

If type is CU_EXTERNAL_MEMORY_HANDLE_TYPE_D3D12_RESOURCE, then exactly one of CUDA_EXTERNAL_MEMORY_HANDLE_DESC::handle::win32::handle and CUDA_EXTERNAL_MEMORY_HANDLE_DESC::handle::win32::name must not be NULL. If CUDA_EXTERNAL_MEMORY_HANDLE_DESC::handle::win32::handle is not NULL, then it must represent a valid shared NT handle that is returned by ID3D12Device::CreateSharedHandle when referring to a ID3D12Resource object. This handle holds a reference to the underlying object. If CUDA_EXTERNAL_MEMORY_HANDLE_DESC::handle::win32::name is not NULL, then it must point to a NULL-terminated array of UTF-16 characters that refers to a ID3D12Resource object.

If type is CU_EXTERNAL_MEMORY_HANDLE_TYPE_D3D11_RESOURCE, then CUDA_EXTERNAL_MEMORY_HANDLE_DESC::handle::win32::handle must represent a valid shared NT handle that is returned by IDXGIResource1::CreateSharedHandle when referring to a ID3D11Resource object. If CUDA_EXTERNAL_MEMORY_HANDLE_DESC::handle::win32::name is not NULL, then it must point to a NULL-terminated array of UTF-16 characters that refers to a ID3D11Resource object.

If type is CU_EXTERNAL_MEMORY_HANDLE_TYPE_D3D11_RESOURCE_KMT, then CUDA_EXTERNAL_MEMORY_HANDLE_DESC::handle::win32::handle must represent a valid shared KMT handle that is returned by IDXGIResource::GetSharedHandle when referring to a ID3D11Resource object and CUDA_EXTERNAL_MEMORY_HANDLE_DESC::handle::win32::name must be NULL.

If type is CU_EXTERNAL_MEMORY_HANDLE_TYPE_NVSCIBUF, then CUDA_EXTERNAL_MEMORY_HANDLE_DESC::handle::nvSciBufObject must be non-NULL and reference a valid NvSciBuf object. If the NvSciBuf object imported into CUDA is also mapped by other drivers, then the application must use cuWaitExternalSemaphoresAsync or cuSignalExternalSemaphoresAsync as appropriate barriers to maintain coherence between CUDA and the other drivers. See CUDA_EXTERNAL_SEMAPHORE_SIGNAL_SKIP_NVSCIBUF_MEMSYNC and CUDA_EXTERNAL_SEMAPHORE_WAIT_SKIP_NVSCIBUF_MEMSYNC for memory synchronization.

The size of the memory object must be specified in size.

Specifying the flag CUDA_EXTERNAL_MEMORY_DEDICATED in flags indicates that the resource is a dedicated resource. What constitutes a dedicated resource is outside the scope of this extension. This flag must be set if type is one of the following: CU_EXTERNAL_MEMORY_HANDLE_TYPE_D3D12_RESOURCE, CU_EXTERNAL_MEMORY_HANDLE_TYPE_D3D11_RESOURCE, CU_EXTERNAL_MEMORY_HANDLE_TYPE_D3D11_RESOURCE_KMT.

Parameters:

memHandleDesc (CUDA_EXTERNAL_MEMORY_HANDLE_DESC) – Memory import handle descriptor

Returns:

Notes

If the Vulkan memory imported into CUDA is mapped on the CPU then the application must use vkInvalidateMappedMemoryRanges/vkFlushMappedMemoryRanges as well as appropriate Vulkan pipeline barriers to maintain coherence between CPU and GPU. For more information on these APIs, please refer to “Synchronization and Cache Control” chapter from Vulkan specification.

cuda.bindings.driver.cuExternalMemoryGetMappedBuffer(extMem, CUDA_EXTERNAL_MEMORY_BUFFER_DESC bufferDesc: Optional[CUDA_EXTERNAL_MEMORY_BUFFER_DESC])

Maps a buffer onto an imported memory object.

Maps a buffer onto an imported memory object and returns a device pointer in devPtr.

The properties of the buffer being mapped must be described in bufferDesc. The CUDA_EXTERNAL_MEMORY_BUFFER_DESC structure is defined as follows:

View CUDA Toolkit Documentation for a C++ code example

where offset is the offset in the memory object where the buffer’s base address is. size is the size of the buffer. flags must be zero.

The offset and size have to be suitably aligned to match the requirements of the external API. Mapping two buffers whose ranges overlap may or may not result in the same virtual address being returned for the overlapped portion. In such cases, the application must ensure that all accesses to that region from the GPU are volatile. Otherwise writes made via one address are not guaranteed to be visible via the other address, even if they’re issued by the same thread. It is recommended that applications map the combined range instead of mapping separate buffers and then apply the appropriate offsets to the returned pointer to derive the individual buffers.

The returned pointer devPtr must be freed using cuMemFree.

Parameters:
Returns:
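
A minimal usage sketch of importing an opaque file descriptor and mapping it as a buffer. The fd and size are hypothetical placeholders (they would come from another API such as Vulkan), the nested handle members are assumed to be exposed as Python attributes, and an initialized driver with a current context is assumed.

    from cuda.bindings import driver

    fd, nbytes = 42, 1 << 20  # hypothetical values for illustration only

    desc = driver.CUDA_EXTERNAL_MEMORY_HANDLE_DESC()
    desc.type = driver.CUexternalMemoryHandleType.CU_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_FD
    desc.handle.fd = fd   # ownership of fd passes to the driver on success
    desc.size = nbytes

    err, ext_mem = driver.cuImportExternalMemory(desc)
    assert err == driver.CUresult.CUDA_SUCCESS

    # Map the whole object as a linear buffer.
    buf_desc = driver.CUDA_EXTERNAL_MEMORY_BUFFER_DESC()
    buf_desc.offset = 0
    buf_desc.size = nbytes
    buf_desc.flags = 0
    err, dptr = driver.cuExternalMemoryGetMappedBuffer(ext_mem, buf_desc)

    # ... use dptr ...
    driver.cuMemFree(dptr)  # mapped buffers are freed with cuMemFree
    driver.cuDestroyExternalMemory(ext_mem)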

cuda.bindings.driver.cuExternalMemoryGetMappedMipmappedArray(extMem, CUDA_EXTERNAL_MEMORY_MIPMAPPED_ARRAY_DESC mipmapDesc: Optional[CUDA_EXTERNAL_MEMORY_MIPMAPPED_ARRAY_DESC])

Maps a CUDA mipmapped array onto an external memory object.

Maps a CUDA mipmapped array onto an external object and returns a handle to it in mipmap.

The properties of the CUDA mipmapped array being mapped must be described in mipmapDesc. The structure CUDA_EXTERNAL_MEMORY_MIPMAPPED_ARRAY_DESC is defined as follows:

View CUDA Toolkit Documentation for a C++ code example

where offset is the offset in the memory object where the base level of the mipmap chain is. arrayDesc describes the format, dimensions and type of the base level of the mipmap chain. For further details on these parameters, please refer to the documentation for cuMipmappedArrayCreate. Note that if the mipmapped array is bound as a color target in the graphics API, then the flag CUDA_ARRAY3D_COLOR_ATTACHMENT must be specified in CUDA_EXTERNAL_MEMORY_MIPMAPPED_ARRAY_DESC::arrayDesc::Flags. numLevels specifies the total number of levels in the mipmap chain.

If extMem was imported from a handle of type CU_EXTERNAL_MEMORY_HANDLE_TYPE_NVSCIBUF, then numLevels must be equal to 1.

The returned CUDA mipmapped array must be freed using cuMipmappedArrayDestroy.

Parameters:
Returns:

cuda.bindings.driver.cuDestroyExternalMemory(extMem)

Destroys an external memory object.

Destroys the specified external memory object. Any existing buffers and CUDA mipmapped arrays mapped onto this object must no longer be used and must be explicitly freed using cuMemFree and cuMipmappedArrayDestroy respectively.

Parameters:

extMem (CUexternalMemory) – External memory object to be destroyed

Returns:

CUDA_SUCCESS, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_HANDLE

Return type:

CUresult

cuda.bindings.driver.cuImportExternalSemaphore(CUDA_EXTERNAL_SEMAPHORE_HANDLE_DESC semHandleDesc: Optional[CUDA_EXTERNAL_SEMAPHORE_HANDLE_DESC])

Imports an external semaphore.

Imports an externally allocated synchronization object and returns a handle to that in extSem_out.

The properties of the handle being imported must be described in semHandleDesc. The CUDA_EXTERNAL_SEMAPHORE_HANDLE_DESC is defined as follows:

View CUDA Toolkit Documentation for a C++ code example

where type specifies the type of handle being imported. CUexternalSemaphoreHandleType is defined as:

View CUDA Toolkit Documentation for a C++ code example

If type is CU_EXTERNAL_SEMAPHORE_HANDLE_TYPE_OPAQUE_FD, then CUDA_EXTERNAL_SEMAPHORE_HANDLE_DESC::handle::fd must be a valid file descriptor referencing a synchronization object. Ownership of the file descriptor is transferred to the CUDA driver when the handle is imported successfully. Performing any operations on the file descriptor after it is imported results in undefined behavior.

If type is CU_EXTERNAL_SEMAPHORE_HANDLE_TYPE_OPAQUE_WIN32, then exactly one of CUDA_EXTERNAL_SEMAPHORE_HANDLE_DESC::handle::win32::handle and CUDA_EXTERNAL_SEMAPHORE_HANDLE_DESC::handle::win32::name must not be NULL. If CUDA_EXTERNAL_SEMAPHORE_HANDLE_DESC::handle::win32::handle is not NULL, then it must represent a valid shared NT handle that references a synchronization object. Ownership of this handle is not transferred to CUDA after the import operation, so the application must release the handle using the appropriate system call. If CUDA_EXTERNAL_SEMAPHORE_HANDLE_DESC::handle::win32::name is not NULL, then it must name a valid synchronization object.

If type is CU_EXTERNAL_SEMAPHORE_HANDLE_TYPE_OPAQUE_WIN32_KMT, then CUDA_EXTERNAL_SEMAPHORE_HANDLE_DESC::handle::win32::handle must be non-NULL and CUDA_EXTERNAL_SEMAPHORE_HANDLE_DESC::handle::win32::name must be NULL. The handle specified must be a globally shared KMT handle. This handle does not hold a reference to the underlying object, and thus will be invalid when all references to the synchronization object are destroyed.

If type is CU_EXTERNAL_SEMAPHORE_HANDLE_TYPE_D3D12_FENCE, then exactly one of CUDA_EXTERNAL_SEMAPHORE_HANDLE_DESC::handle::win32::handle and CUDA_EXTERNAL_SEMAPHORE_HANDLE_DESC::handle::win32::name must not be NULL. If CUDA_EXTERNAL_SEMAPHORE_HANDLE_DESC::handle::win32::handle is not NULL, then it must represent a valid shared NT handle that is returned by ID3D12Device::CreateSharedHandle when referring to a ID3D12Fence object. This handle holds a reference to the underlying object. If CUDA_EXTERNAL_SEMAPHORE_HANDLE_DESC::handle::win32::name is not NULL, then it must name a valid synchronization object that refers to a valid ID3D12Fence object.

If type is CU_EXTERNAL_SEMAPHORE_HANDLE_TYPE_D3D11_FENCE, then CUDA_EXTERNAL_SEMAPHORE_HANDLE_DESC::handle::win32::handle represents a valid shared NT handle that is returned by ID3D11Fence::CreateSharedHandle. If CUDA_EXTERNAL_SEMAPHORE_HANDLE_DESC::handle::win32::name is not NULL, then it must name a valid synchronization object that refers to a valid ID3D11Fence object.

If type is CU_EXTERNAL_SEMAPHORE_HANDLE_TYPE_NVSCISYNC, then CUDA_EXTERNAL_SEMAPHORE_HANDLE_DESC::handle::nvSciSyncObj represents a valid NvSciSyncObj.

If type is CU_EXTERNAL_SEMAPHORE_HANDLE_TYPE_D3D11_KEYED_MUTEX, then CUDA_EXTERNAL_SEMAPHORE_HANDLE_DESC::handle::win32::handle represents a valid shared NT handle that is returned by IDXGIResource1::CreateSharedHandle when referring to a IDXGIKeyedMutex object. If CUDA_EXTERNAL_SEMAPHORE_HANDLE_DESC::handle::win32::name is not NULL, then it must name a valid synchronization object that refers to a valid IDXGIKeyedMutex object.

If type is CU_EXTERNAL_SEMAPHORE_HANDLE_TYPE_D3D11_KEYED_MUTEX_KMT, then CUDA_EXTERNAL_SEMAPHORE_HANDLE_DESC::handle::win32::handle represents a valid shared KMT handle that is returned by IDXGIResource::GetSharedHandle when referring to a IDXGIKeyedMutex object and CUDA_EXTERNAL_SEMAPHORE_HANDLE_DESC::handle::win32::name must be NULL.

If type is CU_EXTERNAL_SEMAPHORE_HANDLE_TYPE_TIMELINE_SEMAPHORE_FD, then CUDA_EXTERNAL_SEMAPHORE_HANDLE_DESC::handle::fd must be a valid file descriptor referencing a synchronization object. Ownership of the file descriptor is transferred to the CUDA driver when the handle is imported successfully. Performing any operations on the file descriptor after it is imported results in undefined behavior.

If type is CU_EXTERNAL_SEMAPHORE_HANDLE_TYPE_TIMELINE_SEMAPHORE_WIN32, then exactly one of CUDA_EXTERNAL_SEMAPHORE_HANDLE_DESC::handle::win32::handle and CUDA_EXTERNAL_SEMAPHORE_HANDLE_DESC::handle::win32::name must not be NULL. If CUDA_EXTERNAL_SEMAPHORE_HANDLE_DESC::handle::win32::handle is not NULL, then it must represent a valid shared NT handle that references a synchronization object. Ownership of this handle is not transferred to CUDA after the import operation, so the application must release the handle using the appropriate system call. If CUDA_EXTERNAL_SEMAPHORE_HANDLE_DESC::handle::win32::name is not NULL, then it must name a valid synchronization object.

Parameters:

semHandleDesc (CUDA_EXTERNAL_SEMAPHORE_HANDLE_DESC) – Semaphore import handle descriptor

Returns:

cuda.bindings.driver.cuSignalExternalSemaphoresAsync(extSemArray: Optional[Tuple[CUexternalSemaphore] | List[CUexternalSemaphore]], paramsArray: Optional[Tuple[CUDA_EXTERNAL_SEMAPHORE_SIGNAL_PARAMS] | List[CUDA_EXTERNAL_SEMAPHORE_SIGNAL_PARAMS]], unsigned int numExtSems, stream)

Signals a set of external semaphore objects.

Enqueues a signal operation on a set of externally allocated semaphore objects in the specified stream. The operations will be executed when all prior operations in the stream complete.

The exact semantics of signaling a semaphore depends on the type of the object.

If the semaphore object is any one of the following types: CU_EXTERNAL_SEMAPHORE_HANDLE_TYPE_OPAQUE_FD, CU_EXTERNAL_SEMAPHORE_HANDLE_TYPE_OPAQUE_WIN32, CU_EXTERNAL_SEMAPHORE_HANDLE_TYPE_OPAQUE_WIN32_KMT then signaling the semaphore will set it to the signaled state.

If the semaphore object is any one of the following types: CU_EXTERNAL_SEMAPHORE_HANDLE_TYPE_D3D12_FENCE, CU_EXTERNAL_SEMAPHORE_HANDLE_TYPE_D3D11_FENCE, CU_EXTERNAL_SEMAPHORE_HANDLE_TYPE_TIMELINE_SEMAPHORE_FD, CU_EXTERNAL_SEMAPHORE_HANDLE_TYPE_TIMELINE_SEMAPHORE_WIN32 then the semaphore will be set to the value specified in CUDA_EXTERNAL_SEMAPHORE_SIGNAL_PARAMS::params::fence::value.

If the semaphore object is of the type CU_EXTERNAL_SEMAPHORE_HANDLE_TYPE_NVSCISYNC, this API sets CUDA_EXTERNAL_SEMAPHORE_SIGNAL_PARAMS::params::nvSciSync::fence to a value that can be used by subsequent waiters of the same NvSciSync object to order operations with those currently submitted in stream. Such an update will overwrite previous contents of CUDA_EXTERNAL_SEMAPHORE_SIGNAL_PARAMS::params::nvSciSync::fence. By default, signaling such an external semaphore object causes appropriate memory synchronization operations to be performed over all external memory objects that are imported as CU_EXTERNAL_MEMORY_HANDLE_TYPE_NVSCIBUF. This ensures that any subsequent accesses made by other importers of the same set of NvSciBuf memory object(s) are coherent. These operations can be skipped by specifying the flag CUDA_EXTERNAL_SEMAPHORE_SIGNAL_SKIP_NVSCIBUF_MEMSYNC, which can be used as a performance optimization when data coherency is not required. But specifying this flag in scenarios where data coherency is required results in undefined behavior. Also, for a semaphore object of the type CU_EXTERNAL_SEMAPHORE_HANDLE_TYPE_NVSCISYNC, if the NvSciSyncAttrList used to create the NvSciSyncObj had not set the flags in cuDeviceGetNvSciSyncAttributes to CUDA_NVSCISYNC_ATTR_SIGNAL, this API will return CUDA_ERROR_NOT_SUPPORTED.

The NvSciSyncFence associated with a semaphore object of the type CU_EXTERNAL_SEMAPHORE_HANDLE_TYPE_NVSCISYNC can be deterministic. For this, the NvSciSyncAttrList used to create the semaphore object must have the NvSciSyncAttrKey_RequireDeterministicFences key set to true. Deterministic fences allow users to enqueue a wait over the semaphore object even before the corresponding signal is enqueued. For such a semaphore object, CUDA guarantees that each signal operation will increment the fence value by ‘1’. Users are expected to track the count of signals enqueued on the semaphore object and insert waits accordingly. When such a semaphore object is signaled from multiple streams, due to concurrent stream execution it is possible that the order in which the semaphore gets signaled is not deterministic. This could lead to waiters of the semaphore getting unblocked incorrectly. Users are expected to handle such situations, either by not using the same semaphore object with deterministic fence support enabled in different streams, or by adding explicit dependencies among such streams so that the semaphore is signaled in order.

If the semaphore object is any one of the following types: CU_EXTERNAL_SEMAPHORE_HANDLE_TYPE_D3D11_KEYED_MUTEX, CU_EXTERNAL_SEMAPHORE_HANDLE_TYPE_D3D11_KEYED_MUTEX_KMT then the keyed mutex will be released with the key specified in CUDA_EXTERNAL_SEMAPHORE_SIGNAL_PARAMS::params::keyedMutex::key.

Parameters:
Returns:

CUDA_SUCCESS, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_HANDLE, CUDA_ERROR_NOT_SUPPORTED

Return type:

CUresult
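
A minimal usage sketch of signaling an imported timeline semaphore. The names ext_sem and stream are hypothetical handles obtained earlier (see cuImportExternalSemaphore above), and the nested params members are assumed to be exposed as Python attributes.

    from cuda.bindings import driver

    # ext_sem: CUexternalSemaphore from cuImportExternalSemaphore (timeline type)
    # stream:  an existing CUstream
    params = driver.CUDA_EXTERNAL_SEMAPHORE_SIGNAL_PARAMS()
    params.params.fence.value = 2   # value the timeline semaphore advances to

    err, = driver.cuSignalExternalSemaphoresAsync([ext_sem], [params], 1, stream)
    assert err == driver.CUresult.CUDA_SUCCESS

    # A matching wait would build CUDA_EXTERNAL_SEMAPHORE_WAIT_PARAMS with
    # params.fence.value <= 2 and call cuWaitExternalSemaphoresAsync.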

cuda.bindings.driver.cuWaitExternalSemaphoresAsync(extSemArray: Optional[Tuple[CUexternalSemaphore] | List[CUexternalSemaphore]], paramsArray: Optional[Tuple[CUDA_EXTERNAL_SEMAPHORE_WAIT_PARAMS] | List[CUDA_EXTERNAL_SEMAPHORE_WAIT_PARAMS]], unsigned int numExtSems, stream)

Waits on a set of external semaphore objects.

Enqueues a wait operation on a set of externally allocated semaphore objects in the specified stream. The operations will be executed when all prior operations in the stream complete.

The exact semantics of waiting on a semaphore depends on the type of the object.

If the semaphore object is any one of the following types: CU_EXTERNAL_SEMAPHORE_HANDLE_TYPE_OPAQUE_FD, CU_EXTERNAL_SEMAPHORE_HANDLE_TYPE_OPAQUE_WIN32, CU_EXTERNAL_SEMAPHORE_HANDLE_TYPE_OPAQUE_WIN32_KMT then waiting on the semaphore will wait until the semaphore reaches the signaled state. The semaphore will then be reset to the unsignaled state. Therefore for every signal operation, there can only be one wait operation.

If the semaphore object is any one of the following types: CU_EXTERNAL_SEMAPHORE_HANDLE_TYPE_D3D12_FENCE, CU_EXTERNAL_SEMAPHORE_HANDLE_TYPE_D3D11_FENCE, CU_EXTERNAL_SEMAPHORE_HANDLE_TYPE_TIMELINE_SEMAPHORE_FD, CU_EXTERNAL_SEMAPHORE_HANDLE_TYPE_TIMELINE_SEMAPHORE_WIN32 then waiting on the semaphore will wait until the value of the semaphore is greater than or equal to CUDA_EXTERNAL_SEMAPHORE_WAIT_PARAMS::params::fence::value.

If the semaphore object is of the type CU_EXTERNAL_SEMAPHORE_HANDLE_TYPE_NVSCISYNC then, waiting on the semaphore will wait until the CUDA_EXTERNAL_SEMAPHORE_SIGNAL_PARAMS::params::nvSciSync::fence is signaled by the signaler of the NvSciSyncObj that was associated with this semaphore object. By default, waiting on such an external semaphore object causes appropriate memory synchronization operations to be performed over all external memory objects that are imported as CU_EXTERNAL_MEMORY_HANDLE_TYPE_NVSCIBUF. This ensures that any subsequent accesses made by other importers of the same set of NvSciBuf memory object(s) are coherent. These operations can be skipped by specifying the flag CUDA_EXTERNAL_SEMAPHORE_WAIT_SKIP_NVSCIBUF_MEMSYNC, which can be used as a performance optimization when data coherency is not required. But specifying this flag in scenarios where data coherency is required results in undefined behavior. Also, for semaphore object of the type CU_EXTERNAL_SEMAPHORE_HANDLE_TYPE_NVSCISYNC, if the NvSciSyncAttrList used to create the NvSciSyncObj had not set the flags in cuDeviceGetNvSciSyncAttributes to CUDA_NVSCISYNC_ATTR_WAIT, this API will return CUDA_ERROR_NOT_SUPPORTED.

If the semaphore object is any one of the following types: CU_EXTERNAL_SEMAPHORE_HANDLE_TYPE_D3D11_KEYED_MUTEX, CU_EXTERNAL_SEMAPHORE_HANDLE_TYPE_D3D11_KEYED_MUTEX_KMT then the keyed mutex will be acquired when it is released with the key specified in CUDA_EXTERNAL_SEMAPHORE_WAIT_PARAMS::params::keyedMutex::key or until the timeout specified by CUDA_EXTERNAL_SEMAPHORE_WAIT_PARAMS::params::keyedMutex::timeoutMs has elapsed. The timeout interval can either be a finite value specified in milliseconds or an infinite value. In case an infinite value is specified, the timeout never elapses. The Windows INFINITE macro must be used to specify an infinite timeout.

Parameters:
Returns:

CUDA_SUCCESS, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_HANDLE, CUDA_ERROR_NOT_SUPPORTED, CUDA_ERROR_TIMEOUT

Return type:

CUresult

cuda.bindings.driver.cuDestroyExternalSemaphore(extSem)

Destroys an external semaphore.

Destroys an external semaphore object and releases any references to the underlying resource. Any outstanding signals or waits must have completed before the semaphore is destroyed.

Parameters:

extSem (CUexternalSemaphore) – External semaphore to be destroyed

Returns:

CUDA_SUCCESS, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_HANDLE

Return type:

CUresult

Stream Memory Operations

This section describes the stream memory operations of the low-level CUDA driver application programming interface.

Support for the CU_STREAM_WAIT_VALUE_NOR flag can be queried with CU_DEVICE_ATTRIBUTE_CAN_USE_STREAM_WAIT_VALUE_NOR_V2.

Support for the cuStreamWriteValue64() and cuStreamWaitValue64() functions, as well as for the CU_STREAM_MEM_OP_WAIT_VALUE_64 and CU_STREAM_MEM_OP_WRITE_VALUE_64 flags, can be queried with CU_DEVICE_ATTRIBUTE_CAN_USE_64_BIT_STREAM_MEM_OPS.

Support for both CU_STREAM_WAIT_VALUE_FLUSH and CU_STREAM_MEM_OP_FLUSH_REMOTE_WRITES requires dedicated platform hardware features and can be queried with cuDeviceGetAttribute() and CU_DEVICE_ATTRIBUTE_CAN_FLUSH_REMOTE_WRITES.

Note that all memory pointers passed as parameters to these operations are device pointers. Where necessary a device pointer should be obtained, for example with cuMemHostGetDevicePointer().

None of the operations accepts pointers to managed memory buffers (cuMemAllocManaged).

Warning: Improper use of these APIs may deadlock the application. Synchronization ordering established through these APIs is not visible to CUDA. CUDA tasks that are (even indirectly) ordered by these APIs should also have that order expressed with CUDA-visible dependencies such as events. This ensures that the scheduler does not serialize them in an improper order.

cuda.bindings.driver.cuStreamWaitValue32(stream, addr, value, unsigned int flags)

Wait on a memory location.

Enqueues a synchronization of the stream on the given memory location. Work ordered after the operation will block until the given condition on the memory is satisfied. By default, the condition is to wait for (int32_t)(*addr - value) >= 0, a cyclic greater-or-equal. Other condition types can be specified via flags.

If the memory was registered via cuMemHostRegister(), the device pointer should be obtained with cuMemHostGetDevicePointer(). This function cannot be used with managed memory (cuMemAllocManaged).

Support for CU_STREAM_WAIT_VALUE_NOR can be queried with cuDeviceGetAttribute() and CU_DEVICE_ATTRIBUTE_CAN_USE_STREAM_WAIT_VALUE_NOR_V2.

Parameters:
Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_NOT_SUPPORTED

Return type:

CUresult

Notes

Warning: Improper use of this API may deadlock the application. Synchronization ordering established through this API is not visible to CUDA. CUDA tasks that are (even indirectly) ordered by this API should also have that order expressed with CUDA-visible dependencies such as events. This ensures that the scheduler does not serialize them in an improper order.

cuda.bindings.driver.cuStreamWaitValue64(stream, addr, value, unsigned int flags)

Wait on a memory location.

Enqueues a synchronization of the stream on the given memory location. Work ordered after the operation will block until the given condition on the memory is satisfied. By default, the condition is to wait for (int64_t)(*addr - value) >= 0, a cyclic greater-or-equal. Other condition types can be specified via flags.

If the memory was registered via cuMemHostRegister(), the device pointer should be obtained with cuMemHostGetDevicePointer().

Support for this can be queried with cuDeviceGetAttribute() and CU_DEVICE_ATTRIBUTE_CAN_USE_64_BIT_STREAM_MEM_OPS.

Parameters:
Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_NOT_SUPPORTED

Return type:

CUresult

Notes

Warning: Improper use of this API may deadlock the application. Synchronization ordering established through this API is not visible to CUDA. CUDA tasks that are (even indirectly) ordered by this API should also have that order expressed with CUDA-visible dependencies such as events. This ensures that the scheduler does not serialize them in an improper order.

cuda.bindings.driver.cuStreamWriteValue32(stream, addr, value, unsigned int flags)

Write a value to memory.

Enqueues a write of a 32-bit value to the given memory location. The write is performed asynchronously, in stream order.

If the memory was registered via cuMemHostRegister(), the device pointer should be obtained with cuMemHostGetDevicePointer(). This function cannot be used with managed memory (cuMemAllocManaged).

Parameters:
Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_NOT_SUPPORTED

Return type:

CUresult
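
A hedged Python sketch of pairing cuStreamWriteValue32() with cuStreamWaitValue32() on a mapped, page-locked host allocation. The context and stream setup and the CU_MEMHOSTALLOC_DEVICEMAP flag follow the driver API, but the sizes and values are only illustrative:

    from cuda.bindings import driver

    err, = driver.cuInit(0)
    err, dev = driver.cuDeviceGet(0)
    err, ctx = driver.cuCtxCreate(0, dev)
    err, stream = driver.cuStreamCreate(0)

    # Page-locked, device-mapped host memory; managed memory is not allowed here.
    err, host_ptr = driver.cuMemHostAlloc(4, driver.CU_MEMHOSTALLOC_DEVICEMAP)
    err, dptr = driver.cuMemHostGetDevicePointer(host_ptr, 0)

    # Enqueue a 32-bit write of 1, then block later stream work until *addr >= 1.
    err, = driver.cuStreamWriteValue32(stream, dptr, driver.cuuint32_t(1), 0)
    err, = driver.cuStreamWaitValue32(
        stream, dptr, driver.cuuint32_t(1),
        driver.CUstreamWaitValue_flags.CU_STREAM_WAIT_VALUE_GEQ)
    err, = driver.cuStreamSynchronize(stream)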

cuda.bindings.driver.cuStreamWriteValue64(stream, addr, value, unsigned int flags)

Write a value to memory.

Enqueues a write of a 64-bit value to the given memory location. The write is performed asynchronously, in stream order.

If the memory was registered via cuMemHostRegister(), the device pointer should be obtained with cuMemHostGetDevicePointer().

Support for this can be queried with cuDeviceGetAttribute() and CU_DEVICE_ATTRIBUTE_CAN_USE_64_BIT_STREAM_MEM_OPS.

Parameters:
Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_NOT_SUPPORTED

Return type:

CUresult

cuda.bindings.driver.cuStreamBatchMemOp(stream, unsigned int count, paramArray: Optional[Tuple[CUstreamBatchMemOpParams] | List[CUstreamBatchMemOpParams]], unsigned int flags)

Batch operations to synchronize the stream via memory operations.

This is a batch version of cuStreamWaitValue32() and cuStreamWriteValue32(). Batching operations may avoid some performance overhead in both the API call and the device execution versus adding them to the stream in separate API calls. The operations are enqueued in the order they appear in the array.

See CUstreamBatchMemOpType for the full set of supported operations, and cuStreamWaitValue32(), cuStreamWaitValue64(), cuStreamWriteValue32(), and cuStreamWriteValue64() for details of specific operations.

See related APIs for details on querying support for specific operations.

Parameters:
  • stream (CUstream or cudaStream_t) – The stream to enqueue the operations in.

  • count (unsigned int) – The number of operations in the array. Must be less than 256.

  • paramArray (List[CUstreamBatchMemOpParams]) – The types and parameters of the individual operations.

  • flags (unsigned int) – Reserved for future expansion; must be 0.

Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_NOT_SUPPORTED

Return type:

CUresult

Notes

Warning: Improper use of this API may deadlock the application. Synchronization ordering established through this API is not visible to CUDA. CUDA tasks that are (even indirectly) ordered by this API should also have that order expressed with CUDA-visible dependencies such as events. This ensures that the scheduler does not serialize them in an improper order. For more information, see the Stream Memory Operations section in the programming guide(https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html).
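
The same write/wait pair can be issued in a single call with cuStreamBatchMemOp(). The sketch below is an illustration, not toolkit sample code; it reuses the `stream` and the mapped device pointer `dptr` from the write/wait sketch shown earlier, and it assumes the struct field setters accept plain Python integers:

    from cuda.bindings import driver

    write_op = driver.CUstreamBatchMemOpParams()
    write_op.operation = driver.CUstreamBatchMemOpType.CU_STREAM_MEM_OP_WRITE_VALUE_32
    write_op.writeValue.address = dptr
    write_op.writeValue.value = 42
    write_op.writeValue.flags = 0

    wait_op = driver.CUstreamBatchMemOpParams()
    wait_op.operation = driver.CUstreamBatchMemOpType.CU_STREAM_MEM_OP_WAIT_VALUE_32
    wait_op.waitValue.address = dptr
    wait_op.waitValue.value = 42
    wait_op.waitValue.flags = driver.CUstreamWaitValue_flags.CU_STREAM_WAIT_VALUE_EQ

    # Operations are executed in array order: the write, then the wait.
    err, = driver.cuStreamBatchMemOp(stream, 2, [write_op, wait_op], 0)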

Execution Control

This section describes the execution control functions of the low-level CUDA driver application programming interface.

class cuda.bindings.driver.CUfunctionLoadingState(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)
CU_FUNCTION_LOADING_STATE_UNLOADED = 0
CU_FUNCTION_LOADING_STATE_LOADED = 1
CU_FUNCTION_LOADING_STATE_MAX = 2
cuda.bindings.driver.cuFuncGetAttribute(attrib: CUfunction_attribute, hfunc)

Returns information about a function.

Returns in *pi the integer value of the attribute attrib on the kernel given by hfunc. The supported attributes are:

  • CU_FUNC_ATTRIBUTE_MAX_THREADS_PER_BLOCK: The maximum number of threads per block, beyond which a launch of the function would fail. This number depends on both the function and the device on which the function is currently loaded.

  • CU_FUNC_ATTRIBUTE_SHARED_SIZE_BYTES: The size in bytes of statically-allocated shared memory per block required by this function. This does not include dynamically-allocated shared memory requested by the user at runtime.

  • CU_FUNC_ATTRIBUTE_CONST_SIZE_BYTES: The size in bytes of user-allocated constant memory required by this function.

  • CU_FUNC_ATTRIBUTE_LOCAL_SIZE_BYTES: The size in bytes of local memory used by each thread of this function.

  • CU_FUNC_ATTRIBUTE_NUM_REGS: The number of registers used by each thread of this function.

  • CU_FUNC_ATTRIBUTE_PTX_VERSION: The PTX virtual architecture version for which the function was compiled. This value is the major PTX version * 10 + the minor PTX version, so a PTX version 1.3 function would return the value 13. Note that this may return the undefined value of 0 for cubins compiled prior to CUDA 3.0.

  • CU_FUNC_ATTRIBUTE_BINARY_VERSION: The binary architecture version for which the function was compiled. This value is the major binary version * 10 + the minor binary version, so a binary version 1.3 function would return the value 13. Note that this will return a value of 10 for legacy cubins that do not have a properly-encoded binary architecture version.

  • CU_FUNC_CACHE_MODE_CA: The attribute to indicate whether the function has been compiled with the user-specified option “-Xptxas --dlcm=ca” set.

  • CU_FUNC_ATTRIBUTE_MAX_DYNAMIC_SHARED_SIZE_BYTES: The maximum size in bytes of dynamically-allocated shared memory.

  • CU_FUNC_ATTRIBUTE_PREFERRED_SHARED_MEMORY_CARVEOUT: Preferred shared memory-L1 cache split ratio in percent of total shared memory.

  • CU_FUNC_ATTRIBUTE_CLUSTER_SIZE_MUST_BE_SET: If this attribute is set, the kernel must launch with a valid cluster size specified.

  • CU_FUNC_ATTRIBUTE_REQUIRED_CLUSTER_WIDTH: The required cluster width in blocks.

  • CU_FUNC_ATTRIBUTE_REQUIRED_CLUSTER_HEIGHT: The required cluster height in blocks.

  • CU_FUNC_ATTRIBUTE_REQUIRED_CLUSTER_DEPTH: The required cluster depth in blocks.

  • CU_FUNC_ATTRIBUTE_NON_PORTABLE_CLUSTER_SIZE_ALLOWED: Indicates whether the function can be launched with a non-portable cluster size. 1 is allowed, 0 is disallowed. A non-portable cluster size may only function on the specific SKUs the program is tested on. The launch might fail if the program is run on a different hardware platform. The CUDA API provides cudaOccupancyMaxActiveClusters to assist with checking whether the desired size can be launched on the current device. A portable cluster size is guaranteed to be functional on all compute capabilities higher than the target compute capability. The portable cluster size for sm_90 is 8 blocks per cluster. This value may increase for future compute capabilities. The specific hardware unit may support higher cluster sizes that are not guaranteed to be portable.

  • CU_FUNC_ATTRIBUTE_CLUSTER_SCHEDULING_POLICY_PREFERENCE: The block scheduling policy of a function. The value type is CUclusterSchedulingPolicy.

With a few exceptions, function attributes may also be queried on unloaded function handles returned from cuModuleEnumerateFunctions. CUDA_ERROR_FUNCTION_NOT_LOADED is returned if the attribute requires a fully loaded function but the function is not loaded. The loading state of a function may be queried using cuFuncIsLoaded. cuFuncLoad may be called to explicitly load a function before querying attributes that require the function to be loaded.

Parameters:
Returns:
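
A small Python sketch of querying a few of these attributes; `kernel` stands for an already-obtained CUfunction handle (for example from cuModuleGetFunction) and is an assumption of this example:

    from cuda.bindings import driver

    attr = driver.CUfunction_attribute
    # Each call returns a (CUresult, int) tuple.
    err, max_threads = driver.cuFuncGetAttribute(
        attr.CU_FUNC_ATTRIBUTE_MAX_THREADS_PER_BLOCK, kernel)
    err, num_regs = driver.cuFuncGetAttribute(
        attr.CU_FUNC_ATTRIBUTE_NUM_REGS, kernel)
    err, static_smem = driver.cuFuncGetAttribute(
        attr.CU_FUNC_ATTRIBUTE_SHARED_SIZE_BYTES, kernel)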

cuda.bindings.driver.cuFuncSetAttribute(hfunc, attrib: CUfunction_attribute, int value)

Sets information about a function.

This call sets the value of a specified attribute attrib on the kernel given by hfunc to the integer value specified by value. This function returns CUDA_SUCCESS if the new value of the attribute is set successfully. If the set fails, this call will return an error. Not all attributes can have values set. Attempting to set a value on a read-only attribute will result in an error (CUDA_ERROR_INVALID_VALUE).

Supported attributes for the cuFuncSetAttribute call are:

Parameters:
Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_HANDLE, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult
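
For example, opting a kernel into a larger dynamic shared-memory carveout might look like the following sketch; `kernel` is again an assumed CUfunction handle, and 64 KiB is an arbitrary illustrative value that must not exceed the device limit:

    from cuda.bindings import driver

    # CU_FUNC_ATTRIBUTE_MAX_DYNAMIC_SHARED_SIZE_BYTES is one of the settable attributes.
    err, = driver.cuFuncSetAttribute(
        kernel,
        driver.CUfunction_attribute.CU_FUNC_ATTRIBUTE_MAX_DYNAMIC_SHARED_SIZE_BYTES,
        64 * 1024)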

cuda.bindings.driver.cuFuncSetCacheConfig(hfunc, config: CUfunc_cache)

Sets the preferred cache configuration for a device function.

On devices where the L1 cache and shared memory use the same hardware resources, this sets through config the preferred cache configuration for the device function hfunc. This is only a preference. The driver will use the requested configuration if possible, but it is free to choose a different configuration if required to execute hfunc. Any context-wide preference set via cuCtxSetCacheConfig() will be overridden by this per-function setting unless the per-function setting is CU_FUNC_CACHE_PREFER_NONE. In that case, the current context-wide setting will be used.

This setting does nothing on devices where the size of the L1 cache and shared memory are fixed.

Launching a kernel with a different preference than the most recent preference setting may insert a device-side synchronization point.

The supported cache configurations are:

Parameters:
  • hfunc (CUfunction) – Kernel to configure cache for

  • config (CUfunc_cache) – Requested cache configuration

Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT

Return type:

CUresult
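
A one-line sketch, with `kernel` an assumed CUfunction handle:

    from cuda.bindings import driver

    # Ask for a larger shared-memory carveout for this kernel (a preference only).
    err, = driver.cuFuncSetCacheConfig(
        kernel, driver.CUfunc_cache.CU_FUNC_CACHE_PREFER_SHARED)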

cuda.bindings.driver.cuFuncGetModule(hfunc)

Returns a module handle.

Returns in *hmod the handle of the module that function hfunc is located in. The lifetime of the module corresponds to the lifetime of the context it was loaded in or until the module is explicitly unloaded.

The CUDA runtime manages its own modules loaded into the primary context. If the handle returned by this API refers to a module loaded by the CUDA runtime, calling cuModuleUnload() on that module will result in undefined behavior.

Parameters:

hfunc (CUfunction) – Function to retrieve module for

Returns:

cuda.bindings.driver.cuFuncGetName(hfunc)

Returns the function name for a CUfunction handle.

Returns in **name the function name associated with the function handle hfunc. The function name is returned as a null-terminated string. The returned name is only valid when the function handle is valid. If the module is unloaded or reloaded, one must call the API again to get the updated name. This API may return a mangled name if the function is not declared as having C linkage. If either **name or hfunc is NULL, CUDA_ERROR_INVALID_VALUE is returned.

Parameters:

hfunc (CUfunction) – The function handle to retrieve the name for

Returns:

cuda.bindings.driver.cuFuncGetParamInfo(func, size_t paramIndex)

Returns the offset and size of a kernel parameter in the device-side parameter layout.

Queries the kernel parameter at paramIndex into func’s list of parameters, and returns in paramOffset and paramSize the offset and size, respectively, where the parameter will reside in the device-side parameter layout. This information can be used to update kernel node parameters from the device via cudaGraphKernelNodeSetParam() and cudaGraphKernelNodeUpdatesApply(). paramIndex must be less than the number of parameters that func takes. paramSize can be set to NULL if only the parameter offset is desired.

Parameters:
  • func (CUfunction) – The function to query

  • paramIndex (size_t) – The parameter index to query

Returns:

  • CUresult – CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE

  • paramOffset (int) – Returns the offset into the device-side parameter layout at which the parameter resides

  • paramSize (int) – Optionally returns the size of the parameter in the device-side parameter layout
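
The three introspection calls above compose naturally. In the sketch below, `kernel` is an assumed CUfunction handle and the kernel is assumed to take at least one parameter:

    from cuda.bindings import driver

    err, module = driver.cuFuncGetModule(kernel)               # owning CUmodule
    err, name = driver.cuFuncGetName(kernel)                   # bytes, possibly mangled
    err, offset, size = driver.cuFuncGetParamInfo(kernel, 0)   # first parameter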

cuda.bindings.driver.cuFuncIsLoaded(function)

Returns whether the function is loaded.

Returns in state the loading state of function.

Parameters:

function (CUfunction) – the function to check

Returns:

cuda.bindings.driver.cuFuncLoad(function)

Loads a function.

Finalizes function loading for function. Calling this API with a fully loaded function has no effect.

Parameters:

function (CUfunction) – the function to load

Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_HANDLE, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult
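
These two calls matter mainly when lazy module loading is enabled. A sketch, with `kernel` an assumed CUfunction handle:

    from cuda.bindings import driver

    err, state = driver.cuFuncIsLoaded(kernel)
    if state == driver.CUfunctionLoadingState.CU_FUNCTION_LOADING_STATE_UNLOADED:
        # Finalize loading before querying load-dependent attributes.
        err, = driver.cuFuncLoad(kernel)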

cuda.bindings.driver.cuLaunchKernel(f, unsigned int gridDimX, unsigned int gridDimY, unsigned int gridDimZ, unsigned int blockDimX, unsigned int blockDimY, unsigned int blockDimZ, unsigned int sharedMemBytes, hStream, kernelParams, void_ptr extra)

Launches a CUDA function CUfunction or a CUDA kernel CUkernel.

Invokes the function CUfunction or the kernel CUkernel f on a gridDimX x gridDimY x gridDimZ grid of blocks. Each block contains blockDimX x blockDimY x blockDimZ threads.

sharedMemBytes sets the amount of dynamic shared memory that will be available to each thread block.

Kernel parameters to f can be specified in one of two ways:

1) Kernel parameters can be specified via kernelParams. If f has N parameters, then kernelParams needs to be an array of N pointers. Each of `kernelParams`[0] through `kernelParams`[N-1] must point to a region of memory from which the actual kernel parameter will be copied. The number of kernel parameters and their offsets and sizes do not need to be specified as that information is retrieved directly from the kernel’s image.

2) Kernel parameters can also be packaged by the application into a single buffer that is passed in via the extra parameter. This places the burden on the application of knowing each kernel parameter’s size and alignment/padding within the buffer. Here is an example of using the extra parameter in this manner:

View CUDA Toolkit Documentation for a C++ code example

The extra parameter exists to allow cuLaunchKernel to take additional less commonly used arguments. extra specifies a list of names of extra settings and their corresponding values. Each extra setting name is immediately followed by the corresponding value. The list must be terminated with either NULL or CU_LAUNCH_PARAM_END.

The error CUDA_ERROR_INVALID_VALUE will be returned if kernel parameters are specified with both kernelParams and extra (i.e. both kernelParams and extra are non-NULL).

Calling cuLaunchKernel() invalidates the persistent function state set through the following deprecated APIs: cuFuncSetBlockShape(), cuFuncSetSharedSize(), cuParamSetSize(), cuParamSeti(), cuParamSetf(), cuParamSetv().

Note that to use cuLaunchKernel(), the kernel f must either have been compiled with toolchain version 3.2 or later so that it will contain kernel parameter information, or have no kernel parameters. If either of these conditions is not met, then cuLaunchKernel() will return CUDA_ERROR_INVALID_IMAGE.

Note that the API can also be used to launch a context-less kernel CUkernel by querying the handle using cuLibraryGetKernel() and then passing it to the API by casting to CUfunction. Here, the context to launch the kernel on will either be taken from the specified stream hStream or the current context in case of a NULL stream.

Parameters:
  • f (CUfunction) – Function CUfunction or Kernel CUkernel to launch

  • gridDimX (unsigned int) – Width of grid in blocks

  • gridDimY (unsigned int) – Height of grid in blocks

  • gridDimZ (unsigned int) – Depth of grid in blocks

  • blockDimX (unsigned int) – X dimension of each thread block

  • blockDimY (unsigned int) – Y dimension of each thread block

  • blockDimZ (unsigned int) – Z dimension of each thread block

  • sharedMemBytes (unsigned int) – Dynamic shared-memory size per thread block in bytes

  • hStream (CUstream or cudaStream_t) – Stream identifier

  • kernelParams (Any) – Array of pointers to kernel parameters

  • extra (List[Any]) – Extra options

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_HANDLE, CUDA_ERROR_INVALID_IMAGE, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_LAUNCH_FAILED, CUDA_ERROR_LAUNCH_OUT_OF_RESOURCES, CUDA_ERROR_LAUNCH_TIMEOUT, CUDA_ERROR_LAUNCH_INCOMPATIBLE_TEXTURING, CUDA_ERROR_SHARED_OBJECT_INIT_FAILED, CUDA_ERROR_NOT_FOUND

Return type:

CUresult
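
A hedged Python sketch of the kernelParams path; it assumes `kernel` is a loaded CUfunction whose (hypothetical) signature is (float* out, unsigned int n), that a context is current, and that `stream` is an existing CUstream. Following the pattern used in the cuda-python examples, the parameter array is built from the addresses of the individual argument values:

    import numpy as np
    from cuda.bindings import driver

    n = np.uint32(1 << 20)
    err, d_out = driver.cuMemAlloc(int(n) * 4)

    # One pointer per kernel parameter, gathered into a contiguous array of addresses.
    d_out_arg = np.array([int(d_out)], dtype=np.uint64)
    n_arg = np.array([n], dtype=np.uint32)
    arg_ptrs = np.array([d_out_arg.ctypes.data, n_arg.ctypes.data], dtype=np.uint64)

    threads = 256
    blocks = (int(n) + threads - 1) // threads
    err, = driver.cuLaunchKernel(
        kernel,
        blocks, 1, 1,          # grid dimensions
        threads, 1, 1,         # block dimensions
        0,                     # dynamic shared memory in bytes
        stream,
        arg_ptrs.ctypes.data,  # kernelParams: address of the array of parameter pointers
        0)                     # extra: unused here
    err, = driver.cuStreamSynchronize(stream)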

cuda.bindings.driver.cuLaunchKernelEx(CUlaunchConfig config: Optional[CUlaunchConfig], f, kernelParams, void_ptr extra)

Launches a CUDA function CUfunction or a CUDA kernel CUkernel with launch-time configuration.

Invokes the function CUfunction or the kernel CUkernel f with the specified launch-time configuration config.

The CUlaunchConfig structure is defined as:

View CUDA Toolkit Documentation for a C++ code example

where:

  • gridDimX is the width of the grid in blocks.

  • gridDimY is the height of the grid in blocks.

  • gridDimZ is the depth of the grid in blocks.

  • blockDimX is the X dimension of each thread block.

  • blockDimY is the Y dimension of each thread block.

  • blockDimZ is the Z dimension of each thread block.

  • sharedMemBytes is the dynamic shared-memory size per thread block in bytes.

  • hStream is the handle to the stream to perform the launch in. The CUDA context associated with this stream must match that associated with function f.

  • attrs is an array of numAttrs contiguous CUlaunchAttribute elements. The value of this pointer is not considered if numAttrs is zero. However, in that case, it is recommended to set the pointer to NULL.

  • numAttrs is the number of attributes populating the first numAttrs positions of the attrs array.

Launch-time configuration is specified by adding entries to attrs. Each entry is an attribute ID and a corresponding attribute value.

The CUlaunchAttribute structure is defined as:

View CUDA Toolkit Documentation for a C++ code example

where:

  • id is a unique enum identifying the attribute.

  • value is a union that holds the attribute value.

An example of using the config parameter:

View CUDA Toolkit Documentation for a C++ code example

The CUlaunchAttributeID enum is defined as:

View CUDA Toolkit Documentation for a C++ code example

and the corresponding CUlaunchAttributeValue union as:

View CUDA Toolkit Documentation for a C++ code example

Setting CU_LAUNCH_ATTRIBUTE_COOPERATIVE to a non-zero value causes the kernel launch to be a cooperative launch, with exactly the same usage and semantics of cuLaunchCooperativeKernel.

Setting CU_LAUNCH_ATTRIBUTE_PROGRAMMATIC_STREAM_SERIALIZATION to a non-zero value causes the kernel to use programmatic means to resolve its stream dependency, enabling the CUDA runtime to opportunistically allow the grid’s execution to overlap with the previous kernel in the stream, if that kernel requests the overlap.

CU_LAUNCH_ATTRIBUTE_PROGRAMMATIC_EVENT records an event along with the kernel launch. An event recorded through this launch attribute is guaranteed to only trigger after all blocks in the associated kernel trigger the event. A block can trigger the event through the PTX instruction launchdep.release or the CUDA builtin function cudaTriggerProgrammaticLaunchCompletion(). A trigger can also be inserted at the beginning of each block’s execution if triggerAtBlockStart is set to a non-zero value. Note that dependents (including the CPU thread calling cuEventSynchronize()) are not guaranteed to observe the release precisely when it is released. For example, cuEventSynchronize() may only observe the event trigger long after the associated kernel has completed. This recording type is primarily meant for establishing programmatic dependency between device tasks. The event supplied must not be an interprocess or interop event. The event must disable timing (i.e. created with the CU_EVENT_DISABLE_TIMING flag set).

CU_LAUNCH_ATTRIBUTE_LAUNCH_COMPLETION_EVENT records an event along with the kernel launch. Nominally, the event is triggered once all blocks of the kernel have begun execution. Currently this is a best effort. If a kernel B has a launch completion dependency on a kernel A, B may wait until A is complete. Alternatively, blocks of B may begin before all blocks of A have begun, for example:

  • If B can claim execution resources unavailable to A, for example if they run on different GPUs.

  • If B has a higher priority than A.

Exercise caution if such an ordering inversion could lead to deadlock. The event supplied must not be an interprocess or interop event. The event must disable timing (i.e. must be created with the CU_EVENT_DISABLE_TIMING flag set).

Setting CU_LAUNCH_ATTRIBUTE_DEVICE_UPDATABLE_KERNEL_NODE to 1 on a captured launch causes the resulting kernel node to be device-updatable. This attribute is specific to graphs, and passing it to a launch in a non-capturing stream results in an error. Passing a value other than 0 or 1 is not allowed.

On success, a handle will be returned via CUlaunchAttributeValue::deviceUpdatableKernelNode::devNode which can be passed to the various device-side update functions to update the node’s kernel parameters from within another kernel. For more information on the types of device updates that can be made, as well as the relevant limitations thereof, see cudaGraphKernelNodeUpdatesApply.

Kernel nodes which are device-updatable have additional restrictions compared to regular kernel nodes. Firstly, device-updatable nodes cannot be removed from their graph via cuGraphDestroyNode. Additionally, once opted-in to this functionality, a node cannot opt out, and any attempt to set the attribute to 0 will result in an error. Graphs containing one or more device-updatable nodes also do not allow multiple instantiation.

The effect of other attributes is consistent with their effect when set via the persistent APIs; see cuStreamSetAttribute and cuFuncSetAttribute for the attributes that can also be set that way.

Kernel parameters to f can be specified in the same ways that they can be using cuLaunchKernel.

Note that the API can also be used to launch a context-less kernel CUkernel by querying the handle using cuLibraryGetKernel() and then passing it to the API by casting to CUfunction. Here, the context to launch the kernel on will either be taken from the specified stream hStream or the current context in case of a NULL stream.

Parameters:
  • config (CUlaunchConfig) – Config to launch

  • f (CUfunction) – Function CUfunction or Kernel CUkernel to launch

  • kernelParams (Any) – Array of pointers to kernel parameters

  • extra (List[Any]) – Extra options

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_HANDLE, CUDA_ERROR_INVALID_IMAGE, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_LAUNCH_FAILED, CUDA_ERROR_LAUNCH_OUT_OF_RESOURCES, CUDA_ERROR_LAUNCH_TIMEOUT, CUDA_ERROR_LAUNCH_INCOMPATIBLE_TEXTURING, CUDA_ERROR_COOPERATIVE_LAUNCH_TOO_LARGE, CUDA_ERROR_SHARED_OBJECT_INIT_FAILED, CUDA_ERROR_NOT_FOUND

Return type:

CUresult
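
A sketch of launching with a launch-time attribute; `kernel` (assumed to take no parameters) and `stream` are assumed handles, the attribute chosen here is only an illustration, and it is assumed that the CUlaunchConfig.attrs setter accepts a Python list of CUlaunchAttribute:

    from cuda.bindings import driver

    config = driver.CUlaunchConfig()
    config.gridDimX, config.gridDimY, config.gridDimZ = 4, 1, 1
    config.blockDimX, config.blockDimY, config.blockDimZ = 256, 1, 1
    config.sharedMemBytes = 0
    config.hStream = stream

    attr = driver.CUlaunchAttribute()
    attr.id = driver.CUlaunchAttributeID.CU_LAUNCH_ATTRIBUTE_PROGRAMMATIC_STREAM_SERIALIZATION
    attr.value.programmaticStreamSerializationAllowed = 1
    config.attrs = [attr]   # assumed: the bindings copy this attribute list
    config.numAttrs = 1

    # kernelParams and extra stay NULL because the assumed kernel takes no parameters.
    err, = driver.cuLaunchKernelEx(config, kernel, 0, 0)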

cuda.bindings.driver.cuLaunchCooperativeKernel(f, unsigned int gridDimX, unsigned int gridDimY, unsigned int gridDimZ, unsigned int blockDimX, unsigned int blockDimY, unsigned int blockDimZ, unsigned int sharedMemBytes, hStream, kernelParams)

Launches a CUDA function CUfunction or a CUDA kernel CUkernel where thread blocks can cooperate and synchronize as they execute.

Invokes the function CUfunction or the kernel CUkernel f on a gridDimX x gridDimY x gridDimZ grid of blocks. Each block contains blockDimX x blockDimY x blockDimZ threads.

sharedMemBytes sets the amount of dynamic shared memory that will be available to each thread block.

The device on which this kernel is invoked must have a non-zero value for the device attribute CU_DEVICE_ATTRIBUTE_COOPERATIVE_LAUNCH.

The total number of blocks launched cannot exceed the maximum number of blocks per multiprocessor as returned by cuOccupancyMaxActiveBlocksPerMultiprocessor (or cuOccupancyMaxActiveBlocksPerMultiprocessorWithFlags) times the number of multiprocessors as specified by the device attribute CU_DEVICE_ATTRIBUTE_MULTIPROCESSOR_COUNT.

The kernel cannot make use of CUDA dynamic parallelism.

Kernel parameters must be specified via kernelParams. If f has N parameters, then kernelParams needs to be an array of N pointers. Each of `kernelParams`[0] through `kernelParams`[N-1] must point to a region of memory from which the actual kernel parameter will be copied. The number of kernel parameters and their offsets and sizes do not need to be specified as that information is retrieved directly from the kernel’s image.

Calling cuLaunchCooperativeKernel() sets persistent function state that is the same as function state set through the cuLaunchKernel API.

When the kernel f is launched via cuLaunchCooperativeKernel(), the previous block shape, shared size and parameter info associated with f is overwritten.

Note that to use cuLaunchCooperativeKernel(), the kernel f must either have been compiled with toolchain version 3.2 or later so that it will contain kernel parameter information, or have no kernel parameters. If either of these conditions is not met, then cuLaunchCooperativeKernel() will return CUDA_ERROR_INVALID_IMAGE.

Note that the API can also be used to launch a context-less kernel CUkernel by querying the handle using cuLibraryGetKernel() and then passing it to the API by casting to CUfunction. Here, the context to launch the kernel on will either be taken from the specified stream hStream or the current context in case of a NULL stream.

Parameters:
  • f (CUfunction) – Function CUfunction or Kernel CUkernel to launch

  • gridDimX (unsigned int) – Width of grid in blocks

  • gridDimY (unsigned int) – Height of grid in blocks

  • gridDimZ (unsigned int) – Depth of grid in blocks

  • blockDimX (unsigned int) – X dimension of each thread block

  • blockDimY (unsigned int) – Y dimension of each thread block

  • blockDimZ (unsigned int) – Z dimension of each thread block

  • sharedMemBytes (unsigned int) – Dynamic shared-memory size per thread block in bytes

  • hStream (CUstream or cudaStream_t) – Stream identifier

  • kernelParams (Any) – Array of pointers to kernel parameters

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_HANDLE, CUDA_ERROR_INVALID_IMAGE, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_LAUNCH_FAILED, CUDA_ERROR_LAUNCH_OUT_OF_RESOURCES, CUDA_ERROR_LAUNCH_TIMEOUT, CUDA_ERROR_LAUNCH_INCOMPATIBLE_TEXTURING, CUDA_ERROR_COOPERATIVE_LAUNCH_TOO_LARGE, CUDA_ERROR_SHARED_OBJECT_INIT_FAILED, CUDA_ERROR_NOT_FOUND

Return type:

CUresult
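
A sketch of sizing and issuing a cooperative launch; `coop_kernel` is an assumed parameterless cooperative CUfunction, and `dev` and `stream` are assumed handles from earlier setup:

    from cuda.bindings import driver

    dattr = driver.CUdevice_attribute
    err, supported = driver.cuDeviceGetAttribute(
        dattr.CU_DEVICE_ATTRIBUTE_COOPERATIVE_LAUNCH, dev)
    if supported:
        block = 256
        err, blocks_per_sm = driver.cuOccupancyMaxActiveBlocksPerMultiprocessor(
            coop_kernel, block, 0)
        err, sm_count = driver.cuDeviceGetAttribute(
            dattr.CU_DEVICE_ATTRIBUTE_MULTIPROCESSOR_COUNT, dev)
        grid = blocks_per_sm * sm_count   # stay within the co-residency limit
        err, = driver.cuLaunchCooperativeKernel(
            coop_kernel, grid, 1, 1, block, 1, 1, 0, stream, 0)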

cuda.bindings.driver.cuLaunchCooperativeKernelMultiDevice(launchParamsList: Optional[Tuple[CUDA_LAUNCH_PARAMS] | List[CUDA_LAUNCH_PARAMS]], unsigned int numDevices, unsigned int flags)

Launches CUDA functions on multiple devices where thread blocks can cooperate and synchronize as they execute.

[Deprecated]

Invokes kernels as specified in the launchParamsList array where each element of the array specifies all the parameters required to perform a single kernel launch. These kernels can cooperate and synchronize as they execute. The size of the array is specified by numDevices.

No two kernels can be launched on the same device. All the devices targeted by this multi-device launch must be identical. All devices must have a non-zero value for the device attribute CU_DEVICE_ATTRIBUTE_COOPERATIVE_MULTI_DEVICE_LAUNCH.

All kernels launched must be identical with respect to the compiled code. Note that any device, constant or managed variables present in the module that owns the kernel launched on each device, are independently instantiated on every device. It is the application’s responsibility to ensure these variables are initialized and used appropriately.

The size of the grids as specified in blocks, the size of the blocks themselves and the amount of shared memory used by each thread block must also match across all launched kernels.

The streams used to launch these kernels must have been created via either cuStreamCreate or cuStreamCreateWithPriority. The NULL stream or CU_STREAM_LEGACY or CU_STREAM_PER_THREAD cannot be used.

The total number of blocks launched per kernel cannot exceed the maximum number of blocks per multiprocessor as returned by cuOccupancyMaxActiveBlocksPerMultiprocessor (or cuOccupancyMaxActiveBlocksPerMultiprocessorWithFlags) times the number of multiprocessors as specified by the device attribute CU_DEVICE_ATTRIBUTE_MULTIPROCESSOR_COUNT. Since the total number of blocks launched per device has to match across all devices, the maximum number of blocks that can be launched per device will be limited by the device with the least number of multiprocessors.

The kernels cannot make use of CUDA dynamic parallelism.

The CUDA_LAUNCH_PARAMS structure is defined as:

View CUDA Toolkit Documentation for a C++ code example

where:

  • function specifies the kernel to be launched. All functions must be identical with respect to the compiled code. Note that you can also specify a context-less kernel CUkernel by querying the handle using cuLibraryGetKernel() and then casting to CUfunction. In this case, the context to launch the kernel on will be taken from the specified stream hStream.

  • gridDimX is the width of the grid in blocks. This must match across all kernels launched.

  • gridDimY is the height of the grid in blocks. This must match across all kernels launched.

  • gridDimZ is the depth of the grid in blocks. This must match across all kernels launched.

  • blockDimX is the X dimension of each thread block. This must match across all kernels launched.

  • blockDimY is the Y dimension of each thread block. This must match across all kernels launched.

  • blockDimZ is the Z dimension of each thread block. This must match across all kernels launched.

  • sharedMemBytes is the dynamic shared-memory size per thread block in bytes. This must match across all kernels launched.

  • hStream is the handle to the stream to perform the launch in. This cannot be the NULL stream or CU_STREAM_LEGACY or CU_STREAM_PER_THREAD. The CUDA context associated with this stream must match that associated with function.

  • kernelParams is an array of pointers to kernel parameters. If function has N parameters, then kernelParams needs to be an array of N pointers. Each of `kernelParams`[0] through `kernelParams`[N-1] must point to a region of memory from which the actual kernel parameter will be copied. The number of kernel parameters and their offsets and sizes do not need to be specified as that information is retrieved directly from the kernel’s image.

By default, the kernel won’t begin execution on any GPU until all prior work in all the specified streams has completed. This behavior can be overridden by specifying the flag CUDA_COOPERATIVE_LAUNCH_MULTI_DEVICE_NO_PRE_LAUNCH_SYNC. When this flag is specified, each kernel will only wait for prior work in the stream corresponding to that GPU to complete before it begins execution.

Similarly, by default, any subsequent work pushed in any of the specified streams will not begin execution until the kernels on all GPUs have completed. This behavior can be overridden by specifying the flag CUDA_COOPERATIVE_LAUNCH_MULTI_DEVICE_NO_POST_LAUNCH_SYNC. When this flag is specified, any subsequent work pushed in any of the specified streams will only wait for the kernel launched on the GPU corresponding to that stream to complete before it begins execution.

Calling cuLaunchCooperativeKernelMultiDevice() sets persistent function state that is the same as function state set through cuLaunchKernel API when called individually for each element in launchParamsList.

When kernels are launched via cuLaunchCooperativeKernelMultiDevice(), the previous block shape, shared size and parameter info associated with each function in launchParamsList is overwritten.

Note that to use cuLaunchCooperativeKernelMultiDevice(), the kernels must either have been compiled with toolchain version 3.2 or later so that they will contain kernel parameter information, or have no kernel parameters. If either of these conditions is not met, then cuLaunchCooperativeKernelMultiDevice() will return CUDA_ERROR_INVALID_IMAGE.

Parameters:
  • launchParamsList (List[CUDA_LAUNCH_PARAMS]) – List of launch parameters, one per device

  • numDevices (unsigned int) – Size of the launchParamsList array

  • flags (unsigned int) – Flags to control launch behavior

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_HANDLE, CUDA_ERROR_INVALID_IMAGE, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_LAUNCH_FAILED, CUDA_ERROR_LAUNCH_OUT_OF_RESOURCES, CUDA_ERROR_LAUNCH_TIMEOUT, CUDA_ERROR_LAUNCH_INCOMPATIBLE_TEXTURING, CUDA_ERROR_COOPERATIVE_LAUNCH_TOO_LARGE, CUDA_ERROR_SHARED_OBJECT_INIT_FAILED

Return type:

CUresult

cuda.bindings.driver.cuLaunchHostFunc(hStream, fn, userData)

Enqueues a host function call in a stream.

Enqueues a host function to run in a stream. The function will be called after currently enqueued work and will block work added after it.

The host function must not make any CUDA API calls. Attempting to use a CUDA API may result in CUDA_ERROR_NOT_PERMITTED, but this error is not guaranteed to be returned. The host function must not perform any synchronization that may depend on outstanding CUDA work not mandated to run earlier. Host functions without a mandated order (such as in independent streams) execute in undefined order and may be serialized.

For the purposes of Unified Memory, execution makes a number of guarantees:

  • The stream is considered idle for the duration of the function’s execution. Thus, for example, the function may always use memory attached to the stream it was enqueued in.

  • The start of execution of the function has the same effect as synchronizing an event recorded in the same stream immediately prior to the function. It thus synchronizes streams which have been “joined” prior to the function.

  • Adding device work to any stream does not have the effect of making the stream active until all preceding host functions and stream callbacks have executed. Thus, for example, a function might use global attached memory even if work has been added to another stream, if the work has been ordered behind the function call with an event.

  • Completion of the function does not cause a stream to become active except as described above. The stream will remain idle if no device work follows the function, and will remain idle across consecutive host functions or stream callbacks without device work in between. Thus, for example, stream synchronization can be done by signaling from a host function at the end of the stream.

Note that, in contrast to cuStreamAddCallback, the function will not be called in the event of an error in the CUDA context.

Parameters:
  • hStream (CUstream or cudaStream_t) – Stream to enqueue function call in

  • fn (CUhostFn) – The function to call once preceding stream operations are complete

  • userData (Any) – User-specified data to be passed to the function

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_HANDLE, CUDA_ERROR_NOT_SUPPORTED

Return type:

CUresult

Graph Management

This section describes the graph management functions of the low-level CUDA driver application programming interface.

cuda.bindings.driver.cuGraphCreate(unsigned int flags)

Creates a graph.

Creates an empty graph, which is returned via phGraph.

Parameters:

flags (unsigned int) – Graph creation flags, must be 0

Returns:

cuda.bindings.driver.cuGraphAddKernelNode(hGraph, dependencies: Optional[Tuple[CUgraphNode] | List[CUgraphNode]], size_t numDependencies, CUDA_KERNEL_NODE_PARAMS nodeParams: Optional[CUDA_KERNEL_NODE_PARAMS])

Creates a kernel execution node and adds it to a graph.

Creates a new kernel execution node and adds it to hGraph with numDependencies dependencies specified via dependencies and arguments specified in nodeParams. It is possible for numDependencies to be 0, in which case the node will be placed at the root of the graph. dependencies may not have any duplicate entries. A handle to the new node will be returned in phGraphNode.

The CUDA_KERNEL_NODE_PARAMS structure is defined as:

View CUDA Toolkit Documentation for a C++ code example

When the graph is launched, the node will invoke kernel func on a (gridDimX x gridDimY x gridDimZ) grid of blocks. Each block contains (blockDimX x blockDimY x blockDimZ) threads.

sharedMemBytes sets the amount of dynamic shared memory that will be available to each thread block.

Kernel parameters to func can be specified in one of two ways:

1) Kernel parameters can be specified via kernelParams. If the kernel has N parameters, then kernelParams needs to be an array of N pointers. Each pointer, from `kernelParams`[0] to `kernelParams`[N-1], points to the region of memory from which the actual parameter will be copied. The number of kernel parameters and their offsets and sizes do not need to be specified as that information is retrieved directly from the kernel’s image.

2) Kernel parameters for non-cooperative kernels can also be packaged by the application into a single buffer that is passed in via extra. This places the burden on the application of knowing each kernel parameter’s size and alignment/padding within the buffer. The extra parameter exists to allow this function to take additional less commonly used arguments. extra specifies a list of names of extra settings and their corresponding values. Each extra setting name is immediately followed by the corresponding value. The list must be terminated with either NULL or CU_LAUNCH_PARAM_END.

The error CUDA_ERROR_INVALID_VALUE will be returned if kernel parameters are specified with both kernelParams and extra (i.e. both kernelParams and extra are non-NULL). CUDA_ERROR_INVALID_VALUE will be returned if extra is used for a cooperative kernel.

The kernelParams or extra array, as well as the argument values it points to, are copied during this call.

Parameters:
  • hGraph (CUgraph or cudaGraph_t) – Graph to which to add the node

  • dependencies (List[CUgraphNode]) – Dependencies of the node

  • numDependencies (size_t) – Number of dependencies

  • nodeParams (CUDA_KERNEL_NODE_PARAMS) – Parameters for the GPU execution node

Returns:

Notes

Kernels launched using graphs must not use texture and surface references. Reading or writing through any texture or surface reference is undefined behavior. This restriction does not apply to texture and surface objects.
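
A sketch of adding a kernel node as a graph root; `kernel` is an assumed CUfunction that takes no parameters, so kernelParams can stay NULL:

    from cuda.bindings import driver

    err, graph = driver.cuGraphCreate(0)

    params = driver.CUDA_KERNEL_NODE_PARAMS()
    params.func = kernel
    params.gridDimX, params.gridDimY, params.gridDimZ = 2, 1, 1
    params.blockDimX, params.blockDimY, params.blockDimZ = 128, 1, 1
    params.sharedMemBytes = 0
    params.kernelParams = 0   # NULL: the assumed kernel has no parameters

    # No dependencies, so the node becomes a root of the graph.
    err, node = driver.cuGraphAddKernelNode(graph, [], 0, params)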

cuda.bindings.driver.cuGraphKernelNodeGetParams(hNode)

Returns a kernel node’s parameters.

Returns the parameters of kernel node hNode in nodeParams. The kernelParams or extra array returned in nodeParams, as well as the argument values it points to, are owned by the node. This memory remains valid until the node is destroyed or its parameters are modified, and should not be modified directly. Use cuGraphKernelNodeSetParams to update the parameters of this node.

The params will contain either kernelParams or extra, according to which of these was most recently set on the node.

Parameters:

hNode (CUgraphNode or cudaGraphNode_t) – Node to get the parameters for

Returns:

cuda.bindings.driver.cuGraphKernelNodeSetParams(hNode, CUDA_KERNEL_NODE_PARAMS nodeParams: Optional[CUDA_KERNEL_NODE_PARAMS])

Sets a kernel node’s parameters.

Sets the parameters of kernel node hNode to nodeParams.

Parameters:
Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_INVALID_HANDLE, CUDA_ERROR_OUT_OF_MEMORY

Return type:

CUresult

cuda.bindings.driver.cuGraphAddMemcpyNode(hGraph, dependencies: Optional[Tuple[CUgraphNode] | List[CUgraphNode]], size_t numDependencies, CUDA_MEMCPY3D copyParams: Optional[CUDA_MEMCPY3D], ctx)

Creates a memcpy node and adds it to a graph.

Creates a new memcpy node and adds it to hGraph with numDependencies dependencies specified via dependencies. It is possible for numDependencies to be 0, in which case the node will be placed at the root of the graph. dependencies may not have any duplicate entries. A handle to the new node will be returned in phGraphNode.

When the graph is launched, the node will perform the memcpy described by copyParams. See cuMemcpy3D() for a description of the structure and its restrictions.

Memcpy nodes have some additional restrictions with regards to managed memory, if the system contains at least one device which has a zero value for the device attribute CU_DEVICE_ATTRIBUTE_CONCURRENT_MANAGED_ACCESS. If one or more of the operands refer to managed memory, then using the memory type CU_MEMORYTYPE_UNIFIED is disallowed for those operand(s). The managed memory will be treated as residing on either the host or the device, depending on which memory type is specified.

Parameters:
  • hGraph (CUgraph or cudaGraph_t) – Graph to which to add the node

  • dependencies (List[CUgraphNode]) – Dependencies of the node

  • numDependencies (size_t) – Number of dependencies

  • copyParams (CUDA_MEMCPY3D) – Parameters for the memory copy

  • ctx (CUcontext) – Context on which to run the node

Returns:

cuda.bindings.driver.cuGraphMemcpyNodeGetParams(hNode)

Returns a memcpy node’s parameters.

Returns the parameters of memcpy node hNode in nodeParams.

Parameters:

hNode (CUgraphNode or cudaGraphNode_t) – Node to get the parameters for

Returns:

cuda.bindings.driver.cuGraphMemcpyNodeSetParams(hNode, CUDA_MEMCPY3D nodeParams: Optional[CUDA_MEMCPY3D])

Sets a memcpy node’s parameters.

Sets the parameters of memcpy node hNode to nodeParams.

Parameters:
Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_VALUE,

Return type:

CUresult

cuda.bindings.driver.cuGraphAddMemsetNode(hGraph, dependencies: Optional[Tuple[CUgraphNode] | List[CUgraphNode]], size_t numDependencies, CUDA_MEMSET_NODE_PARAMS memsetParams: Optional[CUDA_MEMSET_NODE_PARAMS], ctx)

Creates a memset node and adds it to a graph.

Creates a new memset node and adds it to hGraph with numDependencies dependencies specified via dependencies. It is possible for numDependencies to be 0, in which case the node will be placed at the root of the graph. dependencies may not have any duplicate entries. A handle to the new node will be returned in phGraphNode.

The element size must be 1, 2, or 4 bytes. When the graph is launched, the node will perform the memset described by memsetParams.

Parameters:
  • hGraph (CUgraph or cudaGraph_t) – Graph to which to add the node

  • dependencies (List[CUgraphNode]) – Dependencies of the node

  • numDependencies (size_t) – Number of dependencies

  • memsetParams (CUDA_MEMSET_NODE_PARAMS) – Parameters for the memory set

  • ctx (CUcontext) – Context on which to run the node

Returns:
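
A sketch that fills 1024 32-bit words through a memset node; a current context is assumed (queried here with cuCtxGetCurrent), and the sizes and fill value are illustrative only:

    from cuda.bindings import driver

    err, graph = driver.cuGraphCreate(0)
    err, dptr = driver.cuMemAlloc(1024 * 4)

    p = driver.CUDA_MEMSET_NODE_PARAMS()
    p.dst = dptr
    p.pitch = 0           # ignored when height == 1
    p.value = 0xDEADBEEF
    p.elementSize = 4     # must be 1, 2, or 4
    p.width = 1024
    p.height = 1

    err, ctx = driver.cuCtxGetCurrent()
    err, node = driver.cuGraphAddMemsetNode(graph, [], 0, p, ctx)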

cuda.bindings.driver.cuGraphMemsetNodeGetParams(hNode)

Returns a memset node’s parameters.

Returns the parameters of memset node hNode in nodeParams.

Parameters:

hNode (CUgraphNode or cudaGraphNode_t) – Node to get the parameters for

Returns:

cuda.bindings.driver.cuGraphMemsetNodeSetParams(hNode, CUDA_MEMSET_NODE_PARAMS nodeParams: Optional[CUDA_MEMSET_NODE_PARAMS])

Sets a memset node’s parameters.

Sets the parameters of memset node hNode to nodeParams.

Parameters:
Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult

cuda.bindings.driver.cuGraphAddHostNode(hGraph, dependencies: Optional[Tuple[CUgraphNode] | List[CUgraphNode]], size_t numDependencies, CUDA_HOST_NODE_PARAMS nodeParams: Optional[CUDA_HOST_NODE_PARAMS])

Creates a host execution node and adds it to a graph.

Creates a new CPU execution node and adds it to hGraph with numDependencies dependencies specified via dependencies and arguments specified in nodeParams. It is possible for numDependencies to be 0, in which case the node will be placed at the root of the graph. dependencies may not have any duplicate entries. A handle to the new node will be returned in phGraphNode.

When the graph is launched, the node will invoke the specified CPU function. Host nodes are not supported under MPS with pre-Volta GPUs.

Parameters:
  • hGraph (CUgraph or cudaGraph_t) – Graph to which to add the node

  • dependencies (List[CUgraphNode]) – Dependencies of the node

  • numDependencies (size_t) – Number of dependencies

  • nodeParams (CUDA_HOST_NODE_PARAMS) – Parameters for the host node

Returns:

cuda.bindings.driver.cuGraphHostNodeGetParams(hNode)

Returns a host node’s parameters.

Returns the parameters of host node hNode in nodeParams.

Parameters:

hNode (CUgraphNode or cudaGraphNode_t) – Node to get the parameters for

Returns:

cuda.bindings.driver.cuGraphHostNodeSetParams(hNode, CUDA_HOST_NODE_PARAMS nodeParams: Optional[CUDA_HOST_NODE_PARAMS])

Sets a host node’s parameters.

Sets the parameters of host node hNode to nodeParams.

Parameters:
Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult

cuda.bindings.driver.cuGraphAddChildGraphNode(hGraph, dependencies: Optional[Tuple[CUgraphNode] | List[CUgraphNode]], size_t numDependencies, childGraph)

Creates a child graph node and adds it to a graph.

Creates a new node which executes an embedded graph, and adds it to hGraph with numDependencies dependencies specified via dependencies. It is possible for numDependencies to be 0, in which case the node will be placed at the root of the graph. dependencies may not have any duplicate entries. A handle to the new node will be returned in phGraphNode.

If hGraph contains allocation or free nodes, this call will return an error.

The node executes an embedded child graph. The child graph is cloned in this call.

Parameters:
  • hGraph (CUgraph or cudaGraph_t) – Graph to which to add the node

  • dependencies (List[CUgraphNode]) – Dependencies of the node

  • numDependencies (size_t) – Number of dependencies

  • childGraph (CUgraph or cudaGraph_t) – The graph to clone into this node

Returns:

cuda.bindings.driver.cuGraphChildGraphNodeGetGraph(hNode)

Gets a handle to the embedded graph of a child graph node.

Gets a handle to the embedded graph in a child graph node. This call does not clone the graph. Changes to the graph will be reflected in the node, and the node retains ownership of the graph.

Allocation and free nodes cannot be added to the returned graph. Attempting to do so will return an error.

Parameters:

hNode (CUgraphNode or cudaGraphNode_t) – Node to get the embedded graph for

Returns:

cuda.bindings.driver.cuGraphAddEmptyNode(hGraph, dependencies: Optional[Tuple[CUgraphNode] | List[CUgraphNode]], size_t numDependencies)

Creates an empty node and adds it to a graph.

Creates a new node which performs no operation, and adds it to hGraph with numDependencies dependencies specified via dependencies. It is possible for numDependencies to be 0, in which case the node will be placed at the root of the graph. dependencies may not have any duplicate entries. A handle to the new node will be returned in phGraphNode.

An empty node performs no operation during execution, but can be used for transitive ordering. For example, a phased execution graph with 2 groups of n nodes with a barrier between them can be represented using an empty node and 2*n dependency edges, rather than no empty node and n^2 dependency edges.

Parameters:
  • hGraph (CUgraph or cudaGraph_t) – Graph to which to add the node

  • dependencies (List[CUgraphNode]) – Dependencies of the node

  • numDependencies (size_t) – Number of dependencies

Returns:
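
Empty nodes make a convenient minimal end-to-end example. The sketch below builds two empty nodes with one dependency edge, then instantiates and launches the graph; it assumes a current context and an existing `stream`:

    from cuda.bindings import driver

    err, graph = driver.cuGraphCreate(0)
    err, first = driver.cuGraphAddEmptyNode(graph, [], 0)
    err, second = driver.cuGraphAddEmptyNode(graph, [first], 1)

    err, graph_exec = driver.cuGraphInstantiate(graph, 0)
    err, = driver.cuGraphLaunch(graph_exec, stream)
    err, = driver.cuStreamSynchronize(stream)

    err, = driver.cuGraphExecDestroy(graph_exec)
    err, = driver.cuGraphDestroy(graph)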

cuda.bindings.driver.cuGraphAddEventRecordNode(hGraph, dependencies: Optional[Tuple[CUgraphNode] | List[CUgraphNode]], size_t numDependencies, event)

Creates an event record node and adds it to a graph.

Creates a new event record node and adds it to hGraph with numDependencies dependencies specified via dependencies and event specified in event. It is possible for numDependencies to be 0, in which case the node will be placed at the root of the graph. dependencies may not have any duplicate entries. A handle to the new node will be returned in phGraphNode.

Each launch of the graph will record event to capture execution of the node’s dependencies.

Parameters:
  • hGraph (CUgraph or cudaGraph_t) – Graph to which to add the node

  • dependencies (List[CUgraphNode]) – Dependencies of the node

  • numDependencies (size_t) – Number of dependencies

  • event (CUevent or cudaEvent_t) – Event for the node

Returns:

cuda.bindings.driver.cuGraphEventRecordNodeGetEvent(hNode)

Returns the event associated with an event record node.

Returns the event of event record node hNode in event_out.

Parameters:

hNode (CUgraphNode or cudaGraphNode_t) – Node to get the event for

Returns:

cuda.bindings.driver.cuGraphEventRecordNodeSetEvent(hNode, event)

Sets an event record node’s event.

Sets the event of event record node hNode to event.

Parameters:
Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_INVALID_HANDLE, CUDA_ERROR_OUT_OF_MEMORY

Return type:

CUresult

cuda.bindings.driver.cuGraphAddEventWaitNode(hGraph, dependencies: Optional[Tuple[CUgraphNode] | List[CUgraphNode]], size_t numDependencies, event)

Creates an event wait node and adds it to a graph.

Creates a new event wait node and adds it to hGraph with numDependencies dependencies specified via dependencies and event specified in event. It is possible for numDependencies to be 0, in which case the node will be placed at the root of the graph. dependencies may not have any duplicate entries. A handle to the new node will be returned in phGraphNode.

The graph node will wait for all work captured in event. See cuEventRecord() for details on what is captured by an event. event may be from a different context or device than the launch stream.

Parameters:
  • hGraph (CUgraph or cudaGraph_t) – Graph to which to add the node

  • dependencies (List[CUgraphNode]) – Dependencies of the node

  • numDependencies (size_t) – Number of dependencies

  • event (CUevent or cudaEvent_t) – Event for the node

Returns:

cuda.bindings.driver.cuGraphEventWaitNodeGetEvent(hNode)

Returns the event associated with an event wait node.

Returns the event of event wait node hNode in event_out.

Parameters:

hNode (CUgraphNode or cudaGraphNode_t) – Node to get the event for

Returns:

cuda.bindings.driver.cuGraphEventWaitNodeSetEvent(hNode, event)

Sets an event wait node’s event.

Sets the event of event wait node hNode to event.

Parameters:
Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_INVALID_HANDLE, CUDA_ERROR_OUT_OF_MEMORY

Return type:

CUresult

cuda.bindings.driver.cuGraphAddExternalSemaphoresSignalNode(hGraph, dependencies: Optional[Tuple[CUgraphNode] | List[CUgraphNode]], size_t numDependencies, CUDA_EXT_SEM_SIGNAL_NODE_PARAMS nodeParams: Optional[CUDA_EXT_SEM_SIGNAL_NODE_PARAMS])

Creates an external semaphore signal node and adds it to a graph.

Creates a new external semaphore signal node and adds it to hGraph with numDependencies dependencies specified via dependencies and arguments specified in nodeParams. It is possible for numDependencies to be 0, in which case the node will be placed at the root of the graph. dependencies may not have any duplicate entries. A handle to the new node will be returned in phGraphNode.

Performs a signal operation on a set of externally allocated semaphore objects when the node is launched. The operation(s) will occur after all of the node’s dependencies have completed.

Parameters:
Returns:

cuda.bindings.driver.cuGraphExternalSemaphoresSignalNodeGetParams(hNode)

Returns an external semaphore signal node’s parameters.

Returns the parameters of an external semaphore signal node hNode in params_out. The extSemArray and paramsArray returned in params_out are owned by the node. This memory remains valid until the node is destroyed or its parameters are modified, and should not be modified directly. Use cuGraphExternalSemaphoresSignalNodeSetParams to update the parameters of this node.

Parameters:

hNode (CUgraphNode or cudaGraphNode_t) – Node to get the parameters for

Returns:

cuda.bindings.driver.cuGraphExternalSemaphoresSignalNodeSetParams(hNode, CUDA_EXT_SEM_SIGNAL_NODE_PARAMS nodeParams: Optional[CUDA_EXT_SEM_SIGNAL_NODE_PARAMS])

Sets an external semaphore signal node’s parameters.

Sets the parameters of an external semaphore signal node hNode to nodeParams.

Parameters:
Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_INVALID_HANDLE, CUDA_ERROR_OUT_OF_MEMORY

Return type:

CUresult

cuda.bindings.driver.cuGraphAddExternalSemaphoresWaitNode(hGraph, dependencies: Optional[Tuple[CUgraphNode] | List[CUgraphNode]], size_t numDependencies, CUDA_EXT_SEM_WAIT_NODE_PARAMS nodeParams: Optional[CUDA_EXT_SEM_WAIT_NODE_PARAMS])

Creates an external semaphore wait node and adds it to a graph.

Creates a new external semaphore wait node and adds it to hGraph with numDependencies dependencies specified via dependencies and arguments specified in nodeParams. It is possible for numDependencies to be 0, in which case the node will be placed at the root of the graph. dependencies may not have any duplicate entries. A handle to the new node will be returned in phGraphNode.

Performs a wait operation on a set of externally allocated semaphore objects when the node is launched. The node’s dependencies will not be launched until the wait operation has completed.

Parameters:
Returns:

cuda.bindings.driver.cuGraphExternalSemaphoresWaitNodeGetParams(hNode)

Returns an external semaphore wait node’s parameters.

Returns the parameters of an external semaphore wait node hNode in params_out. The extSemArray and paramsArray returned in params_out are owned by the node. This memory remains valid until the node is destroyed or its parameters are modified, and should not be modified directly. Use cuGraphExternalSemaphoresWaitNodeSetParams to update the parameters of this node.

Parameters:

hNode (CUgraphNode or cudaGraphNode_t) – Node to get the parameters for

Returns:

cuda.bindings.driver.cuGraphExternalSemaphoresWaitNodeSetParams(hNode, CUDA_EXT_SEM_WAIT_NODE_PARAMS nodeParams: Optional[CUDA_EXT_SEM_WAIT_NODE_PARAMS])

Sets an external semaphore wait node’s parameters.

Sets the parameters of an external semaphore wait node hNode to nodeParams.

Parameters:
Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_INVALID_HANDLE, CUDA_ERROR_OUT_OF_MEMORY

Return type:

CUresult

cuda.bindings.driver.cuGraphAddBatchMemOpNode(hGraph, dependencies: Optional[Tuple[CUgraphNode] | List[CUgraphNode]], size_t numDependencies, CUDA_BATCH_MEM_OP_NODE_PARAMS nodeParams: Optional[CUDA_BATCH_MEM_OP_NODE_PARAMS])

Creates a batch memory operation node and adds it to a graph.

Creates a new batch memory operation node and adds it to hGraph with numDependencies dependencies specified via dependencies and arguments specified in nodeParams. It is possible for numDependencies to be 0, in which case the node will be placed at the root of the graph. dependencies may not have any duplicate entries. A handle to the new node will be returned in phGraphNode.

When the node is added, the paramArray inside nodeParams is copied and therefore it can be freed after the call returns.

Parameters:
Returns:

Notes

Warning: Improper use of this API may deadlock the application. Synchronization ordering established through this API is not visible to CUDA. CUDA tasks that are (even indirectly) ordered by this API should also have that order expressed with CUDA-visible dependencies such as events. This ensures that the scheduler does not serialize them in an improper order. For more information, see the Stream Memory Operations section in the programming guide (https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html).

cuda.bindings.driver.cuGraphBatchMemOpNodeGetParams(hNode)

Returns a batch mem op node’s parameters.

Returns the parameters of batch mem op node hNode in nodeParams_out. The paramArray returned in nodeParams_out is owned by the node. This memory remains valid until the node is destroyed or its parameters are modified, and should not be modified directly. Use cuGraphBatchMemOpNodeSetParams to update the parameters of this node.

Parameters:

hNode (CUgraphNode or cudaGraphNode_t) – Node to get the parameters for

Returns:

cuda.bindings.driver.cuGraphBatchMemOpNodeSetParams(hNode, CUDA_BATCH_MEM_OP_NODE_PARAMS nodeParams: Optional[CUDA_BATCH_MEM_OP_NODE_PARAMS])

Sets a batch mem op node’s parameters.

Sets the parameters of batch mem op node hNode to nodeParams.

The paramArray inside nodeParams is copied and therefore it can be freed after the call returns.

Parameters:
Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_INVALID_HANDLE, CUDA_ERROR_OUT_OF_MEMORY

Return type:

CUresult

cuda.bindings.driver.cuGraphExecBatchMemOpNodeSetParams(hGraphExec, hNode, CUDA_BATCH_MEM_OP_NODE_PARAMS nodeParams: Optional[CUDA_BATCH_MEM_OP_NODE_PARAMS])

Sets the parameters for a batch mem op node in the given graphExec.

Sets the parameters of a batch mem op node in an executable graph hGraphExec. The node is identified by the corresponding node hNode in the non-executable graph, from which the executable graph was instantiated.

The following fields on operations may be modified on an executable graph:

  • op.waitValue.address

  • op.waitValue.value[64]

  • op.waitValue.flags bits corresponding to wait type (i.e. the CU_STREAM_WAIT_VALUE_FLUSH bit cannot be modified)

  • op.writeValue.address

  • op.writeValue.value[64]

Other fields, such as the context, count or type of operations, and other types of operations such as membars, may not be modified.

hNode must not have been removed from the original graph.

The modifications only affect future launches of hGraphExec. Already enqueued or running launches of hGraphExec are not affected by this call. hNode is also not modified by this call.

The paramArray inside nodeParams is copied and therefore it can be freed after the call returns.

Parameters:
Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE,

Return type:

CUresult

cuda.bindings.driver.cuGraphAddMemAllocNode(hGraph, dependencies: Optional[Tuple[CUgraphNode] | List[CUgraphNode]], size_t numDependencies, CUDA_MEM_ALLOC_NODE_PARAMS nodeParams: Optional[CUDA_MEM_ALLOC_NODE_PARAMS])

Creates an allocation node and adds it to a graph.

Creates a new allocation node and adds it to hGraph with numDependencies dependencies specified via dependencies and arguments specified in nodeParams. It is possible for numDependencies to be 0, in which case the node will be placed at the root of the graph. dependencies may not have any duplicate entries. A handle to the new node will be returned in phGraphNode.

When cuGraphAddMemAllocNode creates an allocation node, it returns the address of the allocation in nodeParams.dptr. The allocation’s address remains fixed across instantiations and launches.

If the allocation is freed in the same graph, by creating a free node using cuGraphAddMemFreeNode, the allocation can be accessed by nodes ordered after the allocation node but before the free node. These allocations cannot be freed outside the owning graph, and they can only be freed once in the owning graph.

If the allocation is not freed in the same graph, then it can be accessed not only by nodes in the graph which are ordered after the allocation node, but also by stream operations ordered after the graph’s execution but before the allocation is freed.

Allocations which are not freed in the same graph can be freed by:

  • adding a free node to another graph,

  • freeing the memory with cuMemFree or cuMemFreeAsync, or

  • instantiating the graph with the CUDA_GRAPH_INSTANTIATE_FLAG_AUTO_FREE_ON_LAUNCH flag, which frees any unfreed allocations when the graph is launched again.

It is not possible to free an allocation in both the owning graph and another graph. If the allocation is freed in the same graph, a free node cannot be added to another graph. If the allocation is freed in another graph, a free node can no longer be added to the owning graph.

The following restrictions apply to graphs which contain allocation and/or memory free nodes:

  • Nodes and edges of the graph cannot be deleted.

  • The graph cannot be used in a child node.

  • Only one instantiation of the graph may exist at any point in time.

  • The graph cannot be cloned.

Parameters:
Returns:

cuda.bindings.driver.cuGraphMemAllocNodeGetParams(hNode)

Returns a memory alloc node’s parameters.

Returns the parameters of a memory alloc node hNode in params_out. The poolProps and accessDescs returned in params_out, are owned by the node. This memory remains valid until the node is destroyed. The returned parameters must not be modified.

Parameters:

hNode (CUgraphNode or cudaGraphNode_t) – Node to get the parameters for

Returns:

cuda.bindings.driver.cuGraphAddMemFreeNode(hGraph, dependencies: Optional[Tuple[CUgraphNode] | List[CUgraphNode]], size_t numDependencies, dptr)

Creates a memory free node and adds it to a graph.

Creates a new memory free node and adds it to hGraph with numDependencies dependencies specified via dependencies and the address to free specified in dptr. It is possible for numDependencies to be 0, in which case the node will be placed at the root of the graph. dependencies may not have any duplicate entries. A handle to the new node will be returned in phGraphNode.

cuGraphAddMemFreeNode will return CUDA_ERROR_INVALID_VALUE if the user attempts to free:

  • an allocation twice in the same graph.

  • an address that was not returned by an allocation node.

  • an invalid address.

The following restrictions apply to graphs which contain allocation and/or memory free nodes:

  • Nodes and edges of the graph cannot be deleted.

  • The graph cannot be used in a child node.

  • Only one instantiation of the graph may exist at any point in time.

  • The graph cannot be cloned.

Parameters:
  • hGraph (CUgraph or cudaGraph_t) – Graph to which to add the node

  • dependencies (List[CUgraphNode]) – Dependencies of the node

  • numDependencies (size_t) – Number of dependencies

  • dptr (CUdeviceptr) – Address of memory to free

Returns:
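
The following standalone sketch (not part of the official reference; error handling is omitted and device ordinal 0 is assumed) shows the alloc/free node pairing described above, reading the allocation address back from nodeParams.dptr:

    from cuda.bindings import driver

    driver.cuInit(0)
    err, dev = driver.cuDeviceGet(0)
    err, ctx = driver.cuCtxCreate(0, dev)
    err, graph = driver.cuGraphCreate(0)

    # Describe a 1 MiB device allocation for the alloc node.
    allocParams = driver.CUDA_MEM_ALLOC_NODE_PARAMS()
    allocParams.poolProps.allocType = driver.CUmemAllocationType.CU_MEM_ALLOCATION_TYPE_PINNED
    allocParams.poolProps.location.type = driver.CUmemLocationType.CU_MEM_LOCATION_TYPE_DEVICE
    allocParams.poolProps.location.id = 0
    allocParams.bytesize = 1 << 20

    err, allocNode = driver.cuGraphAddMemAllocNode(graph, [], 0, allocParams)
    dptr = allocParams.dptr  # address of the allocation, reported back in nodeParams.dptr

    # Free the allocation in the same graph; the free node depends on the alloc node,
    # so any node ordered between the two may use the memory.
    err, freeNode = driver.cuGraphAddMemFreeNode(graph, [allocNode], 1, dptr)

    driver.cuGraphDestroy(graph)
    driver.cuCtxDestroy(ctx)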

cuda.bindings.driver.cuGraphMemFreeNodeGetParams(hNode)

Returns a memory free node’s parameters.

Returns the address of a memory free node hNode in dptr_out.

Parameters:

hNode (CUgraphNode or cudaGraphNode_t) – Node to get the parameters for

Returns:

cuda.bindings.driver.cuDeviceGraphMemTrim(device)

Frees back to the OS unused memory that was cached on the specified device for use with graphs.

Blocks which are not in use by a graph that is either currently executing or scheduled to execute are freed back to the operating system.

Parameters:

device (CUdevice) – The device for which cached memory should be freed.

Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_DEVICE

Return type:

CUresult

cuda.bindings.driver.cuDeviceGetGraphMemAttribute(device, attr: CUgraphMem_attribute)

Query asynchronous allocation attributes related to graphs.

Valid attributes are:

Parameters:
Returns:

cuda.bindings.driver.cuDeviceSetGraphMemAttribute(device, attr: CUgraphMem_attribute, value)

Set asynchronous allocation attributes related to graphs.

Valid attributes are:

Parameters:
  • device (CUdevice) – Specifies the scope of the operation

  • attr (CUgraphMem_attribute) – attribute to set

  • value (Any) – pointer to value to set

Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_DEVICE

Return type:

CUresult
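
As a small illustration (a sketch, assuming dev is a CUdevice obtained with cuDeviceGet as in the earlier example), the current footprint can be queried and the cache trimmed when no graphs are running:

    # Query how much memory graph allocations currently use on the device.
    attr = driver.CUgraphMem_attribute.CU_GRAPH_MEM_ATTR_USED_MEM_CURRENT
    err, used = driver.cuDeviceGetGraphMemAttribute(dev, attr)
    print("graph memory currently in use:", used)

    # Return unused cached blocks to the operating system.
    err, = driver.cuDeviceGraphMemTrim(dev)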

cuda.bindings.driver.cuGraphClone(originalGraph)

Clones a graph.

This function creates a copy of originalGraph and returns it in phGraphClone. All parameters are copied into the cloned graph. The original graph may be modified after this call without affecting the clone.

Child graph nodes in the original graph are recursively copied into the clone.

Parameters:

originalGraph (CUgraph or cudaGraph_t) – Graph to clone

Returns:

cuda.bindings.driver.cuGraphNodeFindInClone(hOriginalNode, hClonedGraph)

Finds a cloned version of a node.

This function returns the node in hClonedGraph corresponding to hOriginalNode in the original graph.

hClonedGraph must have been cloned from hOriginalGraph via cuGraphClone. hOriginalNode must have been in hOriginalGraph at the time of the call to cuGraphClone, and the corresponding cloned node in hClonedGraph must not have been removed. The cloned node is then returned via phClonedNode.

Parameters:
Returns:

See also

cuGraphClone
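
A minimal sketch of the clone/find pattern (a hypothetical graph containing a single empty node; error handling omitted):

    err, original = driver.cuGraphCreate(0)
    err, someNode = driver.cuGraphAddEmptyNode(original, [], 0)

    err, clonedGraph = driver.cuGraphClone(original)
    # Map the original node to its counterpart in the clone.
    err, clonedNode = driver.cuGraphNodeFindInClone(someNode, clonedGraph)
    err, nodeType = driver.cuGraphNodeGetType(clonedNode)  # CU_GRAPH_NODE_TYPE_EMPTY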

cuda.bindings.driver.cuGraphNodeGetType(hNode)

Returns a node’s type.

Returns the node type of hNode in typename.

Parameters:

hNode (CUgraphNode or cudaGraphNode_t) – Node to query

Returns:

cuda.bindings.driver.cuGraphGetNodes(hGraph, size_t numNodes=0)

Returns a graph’s nodes.

Returns a list of hGraph’s nodes. nodes may be NULL, in which case this function will return the number of nodes in numNodes. Otherwise, numNodes entries will be filled in. If numNodes is higher than the actual number of nodes, the remaining entries in nodes will be set to NULL, and the number of nodes actually obtained will be returned in numNodes.

Parameters:
Returns:

cuda.bindings.driver.cuGraphGetRootNodes(hGraph, size_t numRootNodes=0)

Returns a graph’s root nodes.

Returns a list of hGraph’s root nodes. rootNodes may be NULL, in which case this function will return the number of root nodes in numRootNodes. Otherwise, numRootNodes entries will be filled in. If numRootNodes is higher than the actual number of root nodes, the remaining entries in rootNodes will be set to NULL, and the number of nodes actually obtained will be returned in numRootNodes.

Parameters:
  • hGraph (CUgraph or cudaGraph_t) – Graph to query

  • numRootNodes (int) – See description

Returns:

cuda.bindings.driver.cuGraphGetEdges(hGraph, size_t numEdges=0)

Returns a graph’s dependency edges.

Returns a list of hGraph’s dependency edges. Edges are returned via corresponding indices in from and to; that is, the node in to[i] has a dependency on the node in from[i]. from and to may both be NULL, in which case this function only returns the number of edges in numEdges. Otherwise, numEdges entries will be filled in. If numEdges is higher than the actual number of edges, the remaining entries in from and to will be set to NULL, and the number of edges actually returned will be written to numEdges.

Parameters:
  • hGraph (CUgraph or cudaGraph_t) – Graph to get the edges from

  • numEdges (int) – See description

Returns:

cuda.bindings.driver.cuGraphGetEdges_v2(hGraph, size_t numEdges=0)

Returns a graph’s dependency edges (12.3+)

Returns a list of hGraph’s dependency edges. Edges are returned via corresponding indices in from, to and edgeData; that is, the node in to[i] has a dependency on the node in from[i] with data edgeData[i]. from and to may both be NULL, in which case this function only returns the number of edges in numEdges. Otherwise, numEdges entries will be filled in. If numEdges is higher than the actual number of edges, the remaining entries in from and to will be set to NULL, and the number of edges actually returned will be written to numEdges. edgeData may alone be NULL, in which case the edges must all have default (zeroed) edge data. Attempting a lossy query via NULL edgeData will result in CUDA_ERROR_LOSSY_QUERY. If edgeData is non-NULL then from and to must be as well.

Parameters:
  • hGraph (CUgraph or cudaGraph_t) – Graph to get the edges from

  • numEdges (int) – See description

Returns:

cuda.bindings.driver.cuGraphNodeGetDependencies(hNode, size_t numDependencies=0)

Returns a node’s dependencies.

Returns a list of node’s dependencies. dependencies may be NULL, in which case this function will return the number of dependencies in numDependencies. Otherwise, numDependencies entries will be filled in. If numDependencies is higher than the actual number of dependencies, the remaining entries in dependencies will be set to NULL, and the number of nodes actually obtained will be returned in numDependencies.

Parameters:
Returns:

cuda.bindings.driver.cuGraphNodeGetDependencies_v2(hNode, size_t numDependencies=0)

Returns a node’s dependencies (12.3+)

Returns a list of node’s dependencies. dependencies may be NULL, in which case this function will return the number of dependencies in numDependencies. Otherwise, numDependencies entries will be filled in. If numDependencies is higher than the actual number of dependencies, the remaining entries in dependencies will be set to NULL, and the number of nodes actually obtained will be returned in numDependencies.

Note that if an edge has non-zero (non-default) edge data and edgeData is NULL, this API will return CUDA_ERROR_LOSSY_QUERY. If edgeData is non-NULL, then dependencies must be as well.

Parameters:
Returns:

cuda.bindings.driver.cuGraphNodeGetDependentNodes(hNode, size_t numDependentNodes=0)

Returns a node’s dependent nodes.

Returns a list of node’s dependent nodes. dependentNodes may be NULL, in which case this function will return the number of dependent nodes in numDependentNodes. Otherwise, numDependentNodes entries will be filled in. If numDependentNodes is higher than the actual number of dependent nodes, the remaining entries in dependentNodes will be set to NULL, and the number of nodes actually obtained will be returned in numDependentNodes.

Parameters:
Returns:

cuda.bindings.driver.cuGraphNodeGetDependentNodes_v2(hNode, size_t numDependentNodes=0)

Returns a node’s dependent nodes (12.3+)

Returns a list of node’s dependent nodes. dependentNodes may be NULL, in which case this function will return the number of dependent nodes in numDependentNodes. Otherwise, numDependentNodes entries will be filled in. If numDependentNodes is higher than the actual number of dependent nodes, the remaining entries in dependentNodes will be set to NULL, and the number of nodes actually obtained will be returned in numDependentNodes.

Note that if an edge has non-zero (non-default) edge data and edgeData is NULL, this API will return CUDA_ERROR_LOSSY_QUERY. If edgeData is non-NULL, then dependentNodes must be as well.

Parameters:
Returns:

cuda.bindings.driver.cuGraphAddDependencies(hGraph, from_: Optional[Tuple[CUgraphNode] | List[CUgraphNode]], to: Optional[Tuple[CUgraphNode] | List[CUgraphNode]], size_t numDependencies)

Adds dependency edges to a graph.

The number of dependencies to be added is defined by numDependencies. Elements in from and to at corresponding indices define a dependency. Each node in from and to must belong to hGraph.

If numDependencies is 0, elements in from and to will be ignored. Specifying an existing dependency will return an error.

Parameters:
  • hGraph (CUgraph or cudaGraph_t) – Graph to which dependencies are added

  • from (List[CUgraphNode]) – Array of nodes that provide the dependencies

  • to (List[CUgraphNode]) – Array of dependent nodes

  • numDependencies (size_t) – Number of dependencies to be added

Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult
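
For example (a sketch using empty nodes purely to show the edge direction; graph is assumed to be a CUgraph created with cuGraphCreate):

    err, a = driver.cuGraphAddEmptyNode(graph, [], 0)  # will become the root
    err, b = driver.cuGraphAddEmptyNode(graph, [], 0)  # currently also a root

    # Add the edge a -> b: b now depends on a and runs only after a completes.
    err, = driver.cuGraphAddDependencies(graph, [a], [b], 1)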

cuda.bindings.driver.cuGraphAddDependencies_v2(hGraph, from_: Optional[Tuple[CUgraphNode] | List[CUgraphNode]], to: Optional[Tuple[CUgraphNode] | List[CUgraphNode]], edgeData: Optional[Tuple[CUgraphEdgeData] | List[CUgraphEdgeData]], size_t numDependencies)

Adds dependency edges to a graph (12.3+)

The number of dependencies to be added is defined by numDependencies. Elements in from and to at corresponding indices define a dependency. Each node in from and to must belong to hGraph.

If numDependencies is 0, elements in from and to will be ignored. Specifying an existing dependency will return an error.

Parameters:
  • hGraph (CUgraph or cudaGraph_t) – Graph to which dependencies are added

  • from (List[CUgraphNode]) – Array of nodes that provide the dependencies

  • to (List[CUgraphNode]) – Array of dependent nodes

  • edgeData (List[CUgraphEdgeData]) – Optional array of edge data. If NULL, default (zeroed) edge data is assumed.

  • numDependencies (size_t) – Number of dependencies to be added

Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult

cuda.bindings.driver.cuGraphRemoveDependencies(hGraph, from_: Optional[Tuple[CUgraphNode] | List[CUgraphNode]], to: Optional[Tuple[CUgraphNode] | List[CUgraphNode]], size_t numDependencies)

Removes dependency edges from a graph.

The number of dependencies to be removed is defined by numDependencies. Elements in from and to at corresponding indices define a dependency. Each node in from and to must belong to hGraph.

If numDependencies is 0, elements in from and to will be ignored. Specifying a non-existing dependency will return an error.

Dependencies cannot be removed from graphs which contain allocation or free nodes. Any attempt to do so will return an error.

Parameters:
  • hGraph (CUgraph or cudaGraph_t) – Graph from which to remove dependencies

  • from (List[CUgraphNode]) – Array of nodes that provide the dependencies

  • to (List[CUgraphNode]) – Array of dependent nodes

  • numDependencies (size_t) – Number of dependencies to be removed

Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult

cuda.bindings.driver.cuGraphRemoveDependencies_v2(hGraph, from_: Optional[Tuple[CUgraphNode] | List[CUgraphNode]], to: Optional[Tuple[CUgraphNode] | List[CUgraphNode]], edgeData: Optional[Tuple[CUgraphEdgeData] | List[CUgraphEdgeData]], size_t numDependencies)

Removes dependency edges from a graph (12.3+)

The number of dependencies to be removed is defined by numDependencies. Elements in from and to at corresponding indices define a dependency. Each node in from and to must belong to hGraph.

If numDependencies is 0, elements in from and to will be ignored. Specifying an edge that does not exist in the graph, with data matching edgeData, results in an error. edgeData is nullable, which is equivalent to passing default (zeroed) data for each edge.

Dependencies cannot be removed from graphs which contain allocation or free nodes. Any attempt to do so will return an error.

Parameters:
  • hGraph (CUgraph or cudaGraph_t) – Graph from which to remove dependencies

  • from (List[CUgraphNode]) – Array of nodes that provide the dependencies

  • to (List[CUgraphNode]) – Array of dependent nodes

  • edgeData (List[CUgraphEdgeData]) – Optional array of edge data. If NULL, edge data is assumed to be default (zeroed).

  • numDependencies (size_t) – Number of dependencies to be removed

Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult

cuda.bindings.driver.cuGraphDestroyNode(hNode)

Remove a node from the graph.

Removes hNode from its graph. This operation also severs any dependencies of other nodes on hNode and vice versa.

Nodes which belong to a graph which contains allocation or free nodes cannot be destroyed. Any attempt to do so will return an error.

Parameters:

hNode (CUgraphNode or cudaGraphNode_t) – Node to remove

Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult

cuda.bindings.driver.cuGraphInstantiate(hGraph, unsigned long long flags)

Creates an executable graph from a graph.

Instantiates hGraph as an executable graph. The graph is validated for any structural constraints or intra-node constraints which were not previously validated. If instantiation is successful, a handle to the instantiated graph is returned in phGraphExec.

The flags parameter controls the behavior of instantiation and subsequent graph launches. Valid flags are:

If hGraph contains any allocation or free nodes, there can be at most one executable graph in existence for that graph at a time. An attempt to instantiate a second executable graph before destroying the first with cuGraphExecDestroy will result in an error. The same also applies if hGraph contains any device-updatable kernel nodes.

If hGraph contains kernels which call device-side cudaGraphLaunch() from multiple contexts, this will result in an error.

Graphs instantiated for launch on the device have additional restrictions which do not apply to host graphs:

  • The graph’s nodes must reside on a single context.

  • The graph can only contain kernel nodes, memcpy nodes, memset nodes, and child graph nodes.

  • The graph cannot be empty and must contain at least one kernel, memcpy, or memset node. Operation-specific restrictions are outlined below.

  • Kernel nodes:

    • Use of CUDA Dynamic Parallelism is not permitted.

    • Cooperative launches are permitted as long as MPS is not in use.

  • Memcpy nodes:

    • Only copies involving device memory and/or pinned device-mapped host memory are permitted.

    • Copies involving CUDA arrays are not permitted.

    • Both operands must be accessible from the current context, and the current context must match the context of other nodes in the graph.

Parameters:
Returns:
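
A typical host-launch flow might look like the following sketch (assuming graph was built as in the earlier examples; error handling omitted):

    err, graphExec = driver.cuGraphInstantiate(graph, 0)

    err, stream = driver.cuStreamCreate(0)
    err, = driver.cuGraphUpload(graphExec, stream)   # optional: pre-upload the work
    err, = driver.cuGraphLaunch(graphExec, stream)
    err, = driver.cuStreamSynchronize(stream)

    driver.cuStreamDestroy(stream)
    driver.cuGraphExecDestroy(graphExec)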

cuda.bindings.driver.cuGraphInstantiateWithParams(hGraph, CUDA_GRAPH_INSTANTIATE_PARAMS instantiateParams: Optional[CUDA_GRAPH_INSTANTIATE_PARAMS])

Creates an executable graph from a graph.

Instantiates hGraph as an executable graph according to the instantiateParams structure. The graph is validated for any structural constraints or intra-node constraints which were not previously validated. If instantiation is successful, a handle to the instantiated graph is returned in phGraphExec.

instantiateParams controls the behavior of instantiation and subsequent graph launches, as well as returning more detailed information in the event of an error. CUDA_GRAPH_INSTANTIATE_PARAMS is defined as:

View CUDA Toolkit Documentation for a C++ code example

The flags field controls the behavior of instantiation and subsequent graph launches. Valid flags are:

If hGraph contains any allocation or free nodes, there can be at most one executable graph in existence for that graph at a time. An attempt to instantiate a second executable graph before destroying the first with cuGraphExecDestroy will result in an error. The same also applies if hGraph contains any device-updatable kernel nodes.

If hGraph contains kernels which call device-side cudaGraphLaunch() from multiple contexts, this will result in an error.

Graphs instantiated for launch on the device have additional restrictions which do not apply to host graphs:

  • The graph’s nodes must reside on a single context.

  • The graph can only contain kernel nodes, memcpy nodes, memset nodes, and child graph nodes.

  • The graph cannot be empty and must contain at least one kernel, memcpy, or memset node. Operation-specific restrictions are outlined below.

  • Kernel nodes:

    • Use of CUDA Dynamic Parallelism is not permitted.

    • Cooperative launches are permitted as long as MPS is not in use.

  • Memcpy nodes:

    • Only copies involving device memory and/or pinned device-mapped host memory are permitted.

    • Copies involving CUDA arrays are not permitted.

    • Both operands must be accessible from the current context, and the current context must match the context of other nodes in the graph.

In the event of an error, the result_out and hErrNode_out fields will contain more information about the nature of the error. Possible error reporting includes:

  • CUDA_GRAPH_INSTANTIATE_ERROR, if passed an invalid value or if an unexpected error occurred which is described by the return value of the function. hErrNode_out will be set to NULL.

  • CUDA_GRAPH_INSTANTIATE_INVALID_STRUCTURE, if the graph structure is invalid. hErrNode_out will be set to one of the offending nodes.

  • CUDA_GRAPH_INSTANTIATE_NODE_OPERATION_NOT_SUPPORTED, if the graph is instantiated for device launch but contains a node of an unsupported node type, or a node which performs unsupported operations, such as use of CUDA dynamic parallelism within a kernel node. hErrNode_out will be set to this node.

  • CUDA_GRAPH_INSTANTIATE_MULTIPLE_CTXS_NOT_SUPPORTED, if the graph is instantiated for device launch but a node’s context differs from that of another node. This error can also be returned if a graph is not instantiated for device launch and it contains kernels which call device-side cudaGraphLaunch() from multiple contexts. hErrNode_out will be set to this node.

If instantiation is successful, result_out will be set to CUDA_GRAPH_INSTANTIATE_SUCCESS, and hErrNode_out will be set to NULL.

Parameters:
Returns:

cuda.bindings.driver.cuGraphExecGetFlags(hGraphExec)

Query the instantiation flags of an executable graph.

Returns the flags that were passed to instantiation for the given executable graph. CUDA_GRAPH_INSTANTIATE_FLAG_UPLOAD will not be returned by this API as it does not affect the resulting executable graph.

Parameters:

hGraphExec (CUgraphExec or cudaGraphExec_t) – The executable graph to query

Returns:

cuda.bindings.driver.cuGraphExecKernelNodeSetParams(hGraphExec, hNode, CUDA_KERNEL_NODE_PARAMS nodeParams: Optional[CUDA_KERNEL_NODE_PARAMS])

Sets the parameters for a kernel node in the given graphExec.

Sets the parameters of a kernel node in an executable graph hGraphExec. The node is identified by the corresponding node hNode in the non-executable graph, from which the executable graph was instantiated.

hNode must not have been removed from the original graph. All nodeParams fields may change, but the following restrictions apply to func updates:

  • The owning context of the function cannot change.

  • A node whose function originally did not use CUDA dynamic parallelism cannot be updated to a function which uses CDP

  • A node whose function originally did not make device-side update calls cannot be updated to a function which makes device-side update calls.

  • If hGraphExec was not instantiated for device launch, a node whose function originally did not use device-side cudaGraphLaunch() cannot be updated to a function which uses device-side cudaGraphLaunch() unless the node resides on the same context as nodes which contained such calls at instantiate-time. If no such calls were present at instantiation, these updates cannot be performed at all.

The modifications only affect future launches of hGraphExec. Already enqueued or running launches of hGraphExec are not affected by this call. hNode is also not modified by this call.

If hNode is a device-updatable kernel node, the next upload/launch of hGraphExec will overwrite any previous device-side updates. Additionally, applying host updates to a device-updatable kernel node while it is being updated from the device will result in undefined behavior.

Parameters:
Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE,

Return type:

CUresult

cuda.bindings.driver.cuGraphExecMemcpyNodeSetParams(hGraphExec, hNode, CUDA_MEMCPY3D copyParams: Optional[CUDA_MEMCPY3D], ctx)

Sets the parameters for a memcpy node in the given graphExec.

Updates the work represented by hNode in hGraphExec as though hNode had contained copyParams at instantiation. hNode must remain in the graph which was used to instantiate hGraphExec. Changed edges to and from hNode are ignored.

The source and destination memory in copyParams must be allocated from the same contexts as the original source and destination memory. Both the instantiation-time memory operands and the memory operands in copyParams must be 1-dimensional. Zero-length operations are not supported.

The modifications only affect future launches of hGraphExec. Already enqueued or running launches of hGraphExec are not affected by this call. hNode is also not modified by this call.

Returns CUDA_ERROR_INVALID_VALUE if the memory operands’ mappings changed or either the original or new memory operands are multidimensional.

Parameters:
Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE,

Return type:

CUresult

cuda.bindings.driver.cuGraphExecMemsetNodeSetParams(hGraphExec, hNode, CUDA_MEMSET_NODE_PARAMS memsetParams: Optional[CUDA_MEMSET_NODE_PARAMS], ctx)

Sets the parameters for a memset node in the given graphExec.

Updates the work represented by hNode in hGraphExec as though hNode had contained memsetParams at instantiation. hNode must remain in the graph which was used to instantiate hGraphExec. Changed edges to and from hNode are ignored.

Zero sized operations are not supported.

The new destination pointer in memsetParams must be to the same kind of allocation as the original destination pointer and have the same context association and device mapping as the original destination pointer.

Both the value and pointer address may be updated. Changing other aspects of the memset (width, height, element size or pitch) may cause the update to be rejected. Specifically, for 2d memsets, all dimension changes are rejected. For 1d memsets, changes in height are explicitly rejected and other changes are opportunistically allowed if the resulting work maps onto the work resources already allocated for the node.

The modifications only affect future launches of hGraphExec. Already enqueued or running launches of hGraphExec are not affected by this call. hNode is also not modified by this call.

Parameters:
Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE,

Return type:

CUresult

cuda.bindings.driver.cuGraphExecHostNodeSetParams(hGraphExec, hNode, CUDA_HOST_NODE_PARAMS nodeParams: Optional[CUDA_HOST_NODE_PARAMS])

Sets the parameters for a host node in the given graphExec.

Updates the work represented by hNode in hGraphExec as though hNode had contained nodeParams at instantiation. hNode must remain in the graph which was used to instantiate hGraphExec. Changed edges to and from hNode are ignored.

The modifications only affect future launches of hGraphExec. Already enqueued or running launches of hGraphExec are not affected by this call. hNode is also not modified by this call.

Parameters:
Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE,

Return type:

CUresult

cuda.bindings.driver.cuGraphExecChildGraphNodeSetParams(hGraphExec, hNode, childGraph)

Updates node parameters in the child graph node in the given graphExec.

Updates the work represented by hNode in hGraphExec as though the nodes contained in hNode’s graph had the parameters contained in childGraph’s nodes at instantiation. hNode must remain in the graph which was used to instantiate hGraphExec. Changed edges to and from hNode are ignored.

The modifications only affect future launches of hGraphExec. Already enqueued or running launches of hGraphExec are not affected by this call. hNode is also not modified by this call.

The topology of childGraph, as well as the node insertion order, must match that of the graph contained in hNode. See cuGraphExecUpdate() for a list of restrictions on what can be updated in an instantiated graph. The update is recursive, so child graph nodes contained within the top level child graph will also be updated.

Parameters:
Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE,

Return type:

CUresult

cuda.bindings.driver.cuGraphExecEventRecordNodeSetEvent(hGraphExec, hNode, event)

Sets the event for an event record node in the given graphExec.

Sets the event of an event record node in an executable graph hGraphExec. The node is identified by the corresponding node hNode in the non-executable graph, from which the executable graph was instantiated.

The modifications only affect future launches of hGraphExec. Already enqueued or running launches of hGraphExec are not affected by this call. hNode is also not modified by this call.

Parameters:
Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE,

Return type:

CUresult
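
For instance, the recorded event can be swapped after instantiation without rebuilding the graph (sketch; graph is a CUgraph from the earlier examples, error handling omitted):

    err, ev1 = driver.cuEventCreate(0)
    err, ev2 = driver.cuEventCreate(0)

    err, recNode = driver.cuGraphAddEventRecordNode(graph, [], 0, ev1)
    err, graphExec = driver.cuGraphInstantiate(graph, 0)

    # Future launches of graphExec will record ev2 instead of ev1.
    err, = driver.cuGraphExecEventRecordNodeSetEvent(graphExec, recNode, ev2)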

cuda.bindings.driver.cuGraphExecEventWaitNodeSetEvent(hGraphExec, hNode, event)

Sets the event for an event wait node in the given graphExec.

Sets the event of an event wait node in an executable graph hGraphExec. The node is identified by the corresponding node hNode in the non-executable graph, from which the executable graph was instantiated.

The modifications only affect future launches of hGraphExec. Already enqueued or running launches of hGraphExec are not affected by this call. hNode is also not modified by this call.

Parameters:
Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE,

Return type:

CUresult

cuda.bindings.driver.cuGraphExecExternalSemaphoresSignalNodeSetParams(hGraphExec, hNode, CUDA_EXT_SEM_SIGNAL_NODE_PARAMS nodeParams: Optional[CUDA_EXT_SEM_SIGNAL_NODE_PARAMS])

Sets the parameters for an external semaphore signal node in the given graphExec.

Sets the parameters of an external semaphore signal node in an executable graph hGraphExec. The node is identified by the corresponding node hNode in the non-executable graph, from which the executable graph was instantiated.

hNode must not have been removed from the original graph.

The modifications only affect future launches of hGraphExec. Already enqueued or running launches of hGraphExec are not affected by this call. hNode is also not modified by this call.

Changing nodeParams->numExtSems is not supported.

Parameters:
Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE,

Return type:

CUresult

cuda.bindings.driver.cuGraphExecExternalSemaphoresWaitNodeSetParams(hGraphExec, hNode, CUDA_EXT_SEM_WAIT_NODE_PARAMS nodeParams: Optional[CUDA_EXT_SEM_WAIT_NODE_PARAMS])

Sets the parameters for an external semaphore wait node in the given graphExec.

Sets the parameters of an external semaphore wait node in an executable graph hGraphExec. The node is identified by the corresponding node hNode in the non-executable graph, from which the executable graph was instantiated.

hNode must not have been removed from the original graph.

The modifications only affect future launches of hGraphExec. Already enqueued or running launches of hGraphExec are not affected by this call. hNode is also not modified by this call.

Changing nodeParams->numExtSems is not supported.

Parameters:
Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE,

Return type:

CUresult

cuda.bindings.driver.cuGraphNodeSetEnabled(hGraphExec, hNode, unsigned int isEnabled)

Enables or disables the specified node in the given graphExec.

Sets hNode to be either enabled or disabled. Disabled nodes are functionally equivalent to empty nodes until they are reenabled. Existing node parameters are not affected by disabling/enabling the node.

The node is identified by the corresponding node hNode in the non-executable graph, from which the executable graph was instantiated.

hNode must not have been removed from the original graph.

The modifications only affect future launches of hGraphExec. Already enqueued or running launches of hGraphExec are not affected by this call. hNode is also not modified by this call.

If hNode is a device-updatable kernel node, the next upload/launch of hGraphExec will overwrite any previous device-side updates. Additionally, applying host updates to a device-updatable kernel node while it is being updated from the device will result in undefined behavior.

Parameters:
  • hGraphExec (CUgraphExec or cudaGraphExec_t) – The executable graph in which to set the specified node

  • hNode (CUgraphNode or cudaGraphNode_t) – Node from the graph from which graphExec was instantiated

  • isEnabled (unsigned int) – Node is enabled if != 0, otherwise the node is disabled

Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE,

Return type:

CUresult

Notes

Currently only kernel, memset and memcpy nodes are supported.

cuda.bindings.driver.cuGraphNodeGetEnabled(hGraphExec, hNode)

Query whether a node in the given graphExec is enabled.

Sets isEnabled to 1 if hNode is enabled, or 0 if hNode is disabled.

The node is identified by the corresponding node hNode in the non-executable graph, from which the executable graph was instantiated.

hNode must not have been removed from the original graph.

Parameters:
Returns:

Notes

Currently only kernel, memset and memcpy nodes are supported.

This function will not reflect device-side updates for device-updatable kernel nodes.
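
The sketch below toggles a memset node (one of the supported node types listed in the notes above); it assumes ctx and graph from the earlier setup and omits error handling:

    err, dptr = driver.cuMemAlloc(256)

    p = driver.CUDA_MEMSET_NODE_PARAMS()
    p.dst = dptr
    p.elementSize = 1
    p.width = 256
    p.height = 1
    p.pitch = 0      # ignored when height == 1
    p.value = 0xFF

    err, msetNode = driver.cuGraphAddMemsetNode(graph, [], 0, p, ctx)
    err, graphExec = driver.cuGraphInstantiate(graph, 0)

    err, = driver.cuGraphNodeSetEnabled(graphExec, msetNode, 0)       # disable: behaves like an empty node
    err, enabled = driver.cuGraphNodeGetEnabled(graphExec, msetNode)  # enabled == 0
    err, = driver.cuGraphNodeSetEnabled(graphExec, msetNode, 1)       # re-enable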

cuda.bindings.driver.cuGraphUpload(hGraphExec, hStream)

Uploads an executable graph in a stream.

Uploads hGraphExec to the device in hStream without executing it. Uploads of the same hGraphExec will be serialized. Each upload is ordered behind both any previous work in hStream and any previous launches of hGraphExec. Uses memory cached by stream to back the allocations owned by hGraphExec.

Parameters:
Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult

cuda.bindings.driver.cuGraphLaunch(hGraphExec, hStream)

Launches an executable graph in a stream.

Executes hGraphExec in hStream. Only one instance of hGraphExec may be executing at a time. Each launch is ordered behind both any previous work in hStream and any previous launches of hGraphExec. To execute a graph concurrently, it must be instantiated multiple times into multiple executable graphs.

If any allocations created by hGraphExec remain unfreed (from a previous launch) and hGraphExec was not instantiated with CUDA_GRAPH_INSTANTIATE_FLAG_AUTO_FREE_ON_LAUNCH, the launch will fail with CUDA_ERROR_INVALID_VALUE.

Parameters:
Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult

cuda.bindings.driver.cuGraphExecDestroy(hGraphExec)

Destroys an executable graph.

Destroys the executable graph specified by hGraphExec, as well as all of its executable nodes. If the executable graph is in-flight, it will not be terminated, but rather freed asynchronously on completion.

Parameters:

hGraphExec (CUgraphExec or cudaGraphExec_t) – Executable graph to destroy

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult

cuda.bindings.driver.cuGraphDestroy(hGraph)

Destroys a graph.

Destroys the graph specified by hGraph, as well as all of its nodes.

Parameters:

hGraph (CUgraph or cudaGraph_t) – Graph to destroy

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult

See also

cuGraphCreate

cuda.bindings.driver.cuGraphExecUpdate(hGraphExec, hGraph)

Check whether an executable graph can be updated with a graph and perform the update if possible.

Updates the node parameters in the instantiated graph specified by hGraphExec with the node parameters in a topologically identical graph specified by hGraph.

Limitations:

  • Kernel nodes:

    • The owning context of the function cannot change.

    • A node whose function originally did not use CUDA dynamic parallelism cannot be updated to a function which uses CDP.

    • A node whose function originally did not make device-side update calls cannot be updated to a function which makes device-side update calls.

    • A cooperative node cannot be updated to a non-cooperative node, and vice-versa.

    • If the graph was instantiated with CUDA_GRAPH_INSTANTIATE_FLAG_USE_NODE_PRIORITY, the priority attribute cannot change. Equality is checked on the originally requested priority values, before they are clamped to the device’s supported range.

    • If hGraphExec was not instantiated for device launch, a node whose function originally did not use device-side cudaGraphLaunch() cannot be updated to a function which uses device-side cudaGraphLaunch() unless the node resides on the same context as nodes which contained such calls at instantiate-time. If no such calls were present at instantiation, these updates cannot be performed at all.

    • Neither hGraph nor hGraphExec may contain device-updatable kernel nodes.

  • Memset and memcpy nodes:

    • The CUDA device(s) to which the operand(s) was allocated/mapped cannot change.

    • The source/destination memory must be allocated from the same contexts as the original source/destination memory.

    • For 2d memsets, only address and assigned value may be updated.

    • For 1d memsets, updating dimensions is also allowed, but may fail if the resulting operation doesn’t map onto the work resources already allocated for the node.

  • Additional memcpy node restrictions:

    • Changing either the source or destination memory type (i.e. CU_MEMORYTYPE_DEVICE, CU_MEMORYTYPE_ARRAY, etc.) is not supported.

  • External semaphore wait nodes and record nodes:

    • Changing the number of semaphores is not supported.

  • Conditional nodes:

    • Changing node parameters is not supported.

    • Changing parameters of nodes within the conditional body graph is subject to the rules above.

    • Conditional handle flags and default values are updated as part of the graph update.

Note: The API may add further restrictions in future releases. The return code should always be checked.

cuGraphExecUpdate sets the result member of resultInfo to CU_GRAPH_EXEC_UPDATE_ERROR_TOPOLOGY_CHANGED under the following conditions:

  • The count of nodes directly in hGraphExec and hGraph differ, in which case resultInfo->errorNode is set to NULL.

  • hGraph has more exit nodes than hGraphExec, in which case resultInfo->errorNode is set to one of the exit nodes in hGraph.

  • A node in hGraph has a different number of dependencies than the node from hGraphExec it is paired with, in which case resultInfo->errorNode is set to the node from hGraph.

  • A node in hGraph has a dependency that does not match with the corresponding dependency of the paired node from hGraphExec. resultInfo->errorNode will be set to the node from hGraph. resultInfo->errorFromNode will be set to the mismatched dependency. The dependencies are paired based on edge order and a dependency does not match when the nodes are already paired based on other edges examined in the graph.

cuGraphExecUpdate sets the result member of resultInfo to:

  • CU_GRAPH_EXEC_UPDATE_ERROR if passed an invalid value.

  • CU_GRAPH_EXEC_UPDATE_ERROR_TOPOLOGY_CHANGED if the graph topology changed

  • CU_GRAPH_EXEC_UPDATE_ERROR_NODE_TYPE_CHANGED if the type of a node changed, in which case hErrorNode_out is set to the node from hGraph.

  • CU_GRAPH_EXEC_UPDATE_ERROR_UNSUPPORTED_FUNCTION_CHANGE if the function changed in an unsupported way (see note above), in which case hErrorNode_out is set to the node from hGraph.

  • CU_GRAPH_EXEC_UPDATE_ERROR_PARAMETERS_CHANGED if any parameters to a node changed in a way that is not supported, in which case hErrorNode_out is set to the node from hGraph.

  • CU_GRAPH_EXEC_UPDATE_ERROR_ATTRIBUTES_CHANGED if any attributes of a node changed in a way that is not supported, in which case hErrorNode_out is set to the node from hGraph.

  • CU_GRAPH_EXEC_UPDATE_ERROR_NOT_SUPPORTED if something about a node is unsupported, like the node’s type or configuration, in which case hErrorNode_out is set to the node from hGraph

If the update fails for a reason not listed above, the result member of resultInfo will be set to CU_GRAPH_EXEC_UPDATE_ERROR. If the update succeeds, the result member will be set to CU_GRAPH_EXEC_UPDATE_SUCCESS.

cuGraphExecUpdate returns CUDA_SUCCESS when the update was performed successfully. It returns CUDA_ERROR_GRAPH_EXEC_UPDATE_FAILURE if the graph update was not performed because it included changes which violated constraints specific to instantiated graph update.

Parameters:
Returns:
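
A common pattern is to attempt the in-place update and fall back to re-instantiation if it is rejected (sketch; graphExec was instantiated from graph, whose node parameters have since been modified):

    err, resultInfo = driver.cuGraphExecUpdate(graphExec, graph)
    ok = (err == driver.CUresult.CUDA_SUCCESS and
          resultInfo.result == driver.CUgraphExecUpdateResult.CU_GRAPH_EXEC_UPDATE_SUCCESS)
    if not ok:
        # Topology or unsupported parameter changes: rebuild the executable graph.
        driver.cuGraphExecDestroy(graphExec)
        err, graphExec = driver.cuGraphInstantiate(graph, 0)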

cuda.bindings.driver.cuGraphKernelNodeCopyAttributes(dst, src)

Copies attributes from source node to destination node.

Copies attributes from source node src to destination node dst. Both nodes must have the same context.

Parameters:
Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult

cuda.bindings.driver.cuGraphKernelNodeGetAttribute(hNode, attr: CUkernelNodeAttrID)

Queries node attribute.

Queries attribute attr from node hNode and stores it in corresponding member of value_out.

Parameters:
Returns:

cuda.bindings.driver.cuGraphKernelNodeSetAttribute(hNode, attr: CUkernelNodeAttrID, CUkernelNodeAttrValue value: Optional[CUkernelNodeAttrValue])

Sets node attribute.

Sets attribute attr on node hNode from corresponding attribute of value.

Parameters:
Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_INVALID_HANDLE

Return type:

CUresult

cuda.bindings.driver.cuGraphDebugDotPrint(hGraph, char *path, unsigned int flags)

Write a DOT file describing graph structure.

Using the provided hGraph, write to path a DOT formatted description of the graph. By default this includes the graph topology, node types, node id, kernel names and memcpy direction. flags can be specified to write more detailed information about each node type such as parameter values, kernel attributes, node and function handles.

Parameters:
  • hGraph (CUgraph or cudaGraph_t) – The graph to create a DOT file from

  • path (bytes) – The path to write the DOT file to

  • flags (unsigned int) – Flags from CUgraphDebugDot_flags for specifying which additional node information to write

Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_OPERATING_SYSTEM

Return type:

CUresult
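
For example (sketch; the output path and flag choice are arbitrary):

    flags = driver.CUgraphDebugDot_flags.CU_GRAPH_DEBUG_DOT_FLAGS_VERBOSE.value
    err, = driver.cuGraphDebugDotPrint(graph, b"graph_debug.dot", flags)
    # Render offline, e.g. with Graphviz:  dot -Tsvg graph_debug.dot -o graph_debug.svg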

cuda.bindings.driver.cuUserObjectCreate(ptr, destroy, unsigned int initialRefcount, unsigned int flags)

Create a user object.

Create a user object with the specified destructor callback and initial reference count. The initial references are owned by the caller.

Destructor callbacks cannot make CUDA API calls and should avoid blocking behavior, as they are executed by a shared internal thread. Another thread may be signaled to perform such actions, if it does not block forward progress of tasks scheduled through CUDA.

See CUDA User Objects in the CUDA C++ Programming Guide for more information on user objects.

Parameters:
  • ptr (Any) – The pointer to pass to the destroy function

  • destroy (CUhostFn) – Callback to free the user object when it is no longer in use

  • initialRefcount (unsigned int) – The initial refcount to create the object with, typically 1. The initial references are owned by the calling thread.

  • flags (unsigned int) – Currently it is required to pass CU_USER_OBJECT_NO_DESTRUCTOR_SYNC, which is the only defined flag. This indicates that the destroy callback cannot be waited on by any CUDA API. Users requiring synchronization of the callback should signal its completion manually.

Returns:

cuda.bindings.driver.cuUserObjectRetain(object, unsigned int count)

Retain a reference to a user object.

Retains new references to a user object. The new references are owned by the caller.

See CUDA User Objects in the CUDA C++ Programming Guide for more information on user objects.

Parameters:
  • object (CUuserObject) – The object to retain

  • count (unsigned int) – The number of references to retain, typically 1. Must be nonzero and not larger than INT_MAX.

Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult

cuda.bindings.driver.cuUserObjectRelease(object, unsigned int count)

Release a reference to a user object.

Releases user object references owned by the caller. The object’s destructor is invoked if the reference count reaches zero.

It is undefined behavior to release references not owned by the caller, or to use a user object handle after all references are released.

See CUDA User Objects in the CUDA C++ Programming Guide for more information on user objects.

Parameters:
  • object (CUuserObject) – The object to release

  • count (unsigned int) – The number of references to release, typically 1. Must be nonzero and not larger than INT_MAX.

Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult

cuda.bindings.driver.cuGraphRetainUserObject(graph, object, unsigned int count, unsigned int flags)

Retain a reference to a user object from a graph.

Creates or moves user object references that will be owned by a CUDA graph.

See CUDA User Objects in the CUDA C++ Programming Guide for more information on user objects.

Parameters:
  • graph (CUgraph or cudaGraph_t) – The graph to associate the reference with

  • object (CUuserObject) – The user object to retain a reference for

  • count (unsigned int) – The number of references to add to the graph, typically 1. Must be nonzero and not larger than INT_MAX.

  • flags (unsigned int) – The optional flag CU_GRAPH_USER_OBJECT_MOVE transfers references from the calling thread, rather than create new references. Pass 0 to create new references.

Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult

cuda.bindings.driver.cuGraphReleaseUserObject(graph, object, unsigned int count)

Release a user object reference from a graph.

Releases user object references owned by a graph.

See CUDA User Objects in the CUDA C++ Programming Guide for more information on user objects.

Parameters:
  • graph (CUgraph or cudaGraph_t) – The graph that will release the reference

  • object (CUuserObject) – The user object to release a reference for

  • count (unsigned int) – The number of references to release, typically 1. Must be nonzero and not larger than INT_MAX.

Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult

cuda.bindings.driver.cuGraphAddNode(hGraph, dependencies: Optional[Tuple[CUgraphNode] | List[CUgraphNode]], size_t numDependencies, CUgraphNodeParams nodeParams: Optional[CUgraphNodeParams])

Adds a node of arbitrary type to a graph.

Creates a new node in hGraph described by nodeParams with numDependencies dependencies specified via dependencies. numDependencies may be 0. dependencies may be null if numDependencies is 0. dependencies may not have any duplicate entries.

nodeParams is a tagged union. The node type should be specified in the typename field, and type-specific parameters in the corresponding union member. All unused bytes - that is, reserved0 and all bytes past the utilized union member - must be set to zero. It is recommended to use brace initialization or memset to ensure all bytes are initialized.

Note that for some node types, nodeParams may contain “out parameters” which are modified during the call, such as nodeParams->alloc.dptr.

A handle to the new node will be returned in phGraphNode.

Parameters:
  • hGraph (CUgraph or cudaGraph_t) – Graph to which to add the node

  • dependencies (List[CUgraphNode]) – Dependencies of the node

  • numDependencies (size_t) – Number of dependencies

  • nodeParams (CUgraphNodeParams) – Specification of the node

Returns:

cuda.bindings.driver.cuGraphAddNode_v2(hGraph, dependencies: Optional[Tuple[CUgraphNode] | List[CUgraphNode]], dependencyData: Optional[Tuple[CUgraphEdgeData] | List[CUgraphEdgeData]], size_t numDependencies, CUgraphNodeParams nodeParams: Optional[CUgraphNodeParams])

Adds a node of arbitrary type to a graph (12.3+)

Creates a new node in hGraph described by nodeParams with numDependencies dependencies specified via dependencies. numDependencies may be 0. dependencies may be null if numDependencies is 0. dependencies may not have any duplicate entries.

nodeParams is a tagged union. The node type should be specified in the typename field, and type-specific parameters in the corresponding union member. All unused bytes - that is, reserved0 and all bytes past the utilized union member - must be set to zero. It is recommended to use brace initialization or memset to ensure all bytes are initialized.

Note that for some node types, nodeParams may contain “out parameters” which are modified during the call, such as nodeParams->alloc.dptr.

A handle to the new node will be returned in phGraphNode.

Parameters:
  • hGraph (CUgraph or cudaGraph_t) – Graph to which to add the node

  • dependencies (List[CUgraphNode]) – Dependencies of the node

  • dependencyData (List[CUgraphEdgeData]) – Optional edge data for the dependencies. If NULL, the data is assumed to be default (zeroed) for all dependencies.

  • numDependencies (size_t) – Number of dependencies

  • nodeParams (CUgraphNodeParams) – Specification of the node

Returns:

cuda.bindings.driver.cuGraphNodeSetParams(hNode, CUgraphNodeParams nodeParams: Optional[CUgraphNodeParams])

Updates a graph node’s parameters.

Sets the parameters of graph node hNode to nodeParams. The node type specified by nodeParams->type must match the type of hNode. nodeParams must be fully initialized and all unused bytes (reserved, padding) zeroed.

Modifying parameters is not supported for node types CU_GRAPH_NODE_TYPE_MEM_ALLOC and CU_GRAPH_NODE_TYPE_MEM_FREE.

Parameters:
Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_NOT_SUPPORTED

Return type:

CUresult

cuda.bindings.driver.cuGraphExecNodeSetParams(hGraphExec, hNode, CUgraphNodeParams nodeParams: Optional[CUgraphNodeParams])

Updates a graph node’s parameters in an instantiated graph.

Sets the parameters of a node in an executable graph hGraphExec. The node is identified by the corresponding node hNode in the non-executable graph from which the executable graph was instantiated. hNode must not have been removed from the original graph.

The modifications only affect future launches of hGraphExec. Already enqueued or running launches of hGraphExec are not affected by this call. hNode is also not modified by this call.

Allowed changes to parameters on executable graphs are as follows:

View CUDA Toolkit Documentation for a table example

Parameters:
Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_NOT_SUPPORTED

Return type:

CUresult

cuda.bindings.driver.cuGraphConditionalHandleCreate(hGraph, ctx, unsigned int defaultLaunchValue, unsigned int flags)

Create a conditional handle.

Creates a conditional handle associated with hGraph.

The conditional handle must be associated with a conditional node in this graph or one of its children.

Handles not associated with a conditional node may cause graph instantiation to fail.

Handles can only be set from the context with which they are associated.

Parameters:
  • hGraph (CUgraph or cudaGraph_t) – Graph which will contain the conditional node using this handle.

  • ctx (CUcontext) – Context for the handle and associated conditional node.

  • defaultLaunchValue (unsigned int) – Optional initial value for the conditional variable.

  • flags (unsigned int) – Currently must be CU_GRAPH_COND_ASSIGN_DEFAULT or 0.

Returns:

See also

cuGraphAddNode
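
A minimal sketch of handle creation (the handle would subsequently be referenced from the conditional member of the CUgraphNodeParams passed to cuGraphAddNode, which is not shown here; ctx and graph come from the earlier setup):

    # defaultLaunchValue is ignored unless CU_GRAPH_COND_ASSIGN_DEFAULT is passed in flags.
    err, condHandle = driver.cuGraphConditionalHandleCreate(graph, ctx, 0, 0)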

Occupancy

This section describes the occupancy calculation functions of the low-level CUDA driver application programming interface.

cuda.bindings.driver.cuOccupancyMaxActiveBlocksPerMultiprocessor(func, int blockSize, size_t dynamicSMemSize)

Returns occupancy of a function.

Returns in *numBlocks the number of the maximum active blocks per streaming multiprocessor.

Note that the API can also be used with context-less kernel CUkernel by querying the handle using cuLibraryGetKernel() and then passing it to the API by casting to CUfunction. Here, the context to use for calculations will be the current context.

Parameters:
  • func (CUfunction) – Kernel for which occupancy is calculated

  • blockSize (int) – Block size the kernel is intended to be launched with

  • dynamicSMemSize (size_t) – Per-block dynamic shared memory usage intended, in bytes

Returns:
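
For example (sketch; kernel is assumed to be a CUfunction obtained earlier, e.g. via cuModuleGetFunction, and error handling is omitted):

    block_size = 256
    dyn_smem = 0  # bytes of dynamic shared memory per block
    err, num_blocks = driver.cuOccupancyMaxActiveBlocksPerMultiprocessor(kernel, block_size, dyn_smem)
    print("max resident blocks per SM at block size", block_size, ":", num_blocks)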

cuda.bindings.driver.cuOccupancyMaxActiveBlocksPerMultiprocessorWithFlags(func, int blockSize, size_t dynamicSMemSize, unsigned int flags)

Returns occupancy of a function.

Returns in *numBlocks the number of the maximum active blocks per streaming multiprocessor.

The Flags parameter controls how special cases are handled. The valid flags are:

Note that the API can also be used with a context-less kernel CUkernel by querying the handle using cuLibraryGetKernel() and then passing it to the API by casting to CUfunction. Here, the context to use for calculations will be the current context.

Parameters:
  • func (CUfunction) – Kernel for which occupancy is calculated

  • blockSize (int) – Block size the kernel is intended to be launched with

  • dynamicSMemSize (size_t) – Per-block dynamic shared memory usage intended, in bytes

  • flags (unsigned int) – Requested behavior for the occupancy calculator

Returns:

cuda.bindings.driver.cuOccupancyMaxPotentialBlockSize(func, blockSizeToDynamicSMemSize, size_t dynamicSMemSize, int blockSizeLimit)

Suggest a launch configuration with reasonable occupancy.

Returns in *blockSize a reasonable block size that can achieve the maximum occupancy (or, the maximum number of active warps with the fewest blocks per multiprocessor), and in *minGridSize the minimum grid size to achieve the maximum occupancy.

If blockSizeLimit is 0, the configurator will use the maximum block size permitted by the device / function instead.

If per-block dynamic shared memory allocation is not needed, the user should leave both blockSizeToDynamicSMemSize and dynamicSMemSize as 0.

If per-block dynamic shared memory allocation is needed, then if the dynamic shared memory size is constant regardless of block size, the size should be passed through dynamicSMemSize, and blockSizeToDynamicSMemSize should be NULL.

Otherwise, if the per-block dynamic shared memory size varies with different block sizes, the user needs to provide a unary function through blockSizeToDynamicSMemSize that computes the dynamic shared memory needed by func for any given block size. dynamicSMemSize is ignored. An example signature is:

View CUDA Toolkit Documentation for a C++ code example

Note that the API can also be used with context-less kernel CUkernel by querying the handle using cuLibraryGetKernel() and then passing it to the API by casting to CUfunction. Here, the context to use for calculations will be the current context.

Parameters:
  • func (CUfunction) – Kernel for which launch configuration is calculated

  • blockSizeToDynamicSMemSize (CUoccupancyB2DSize) – A function that calculates how much per-block dynamic shared memory func uses based on the block size

  • dynamicSMemSize (size_t) – Dynamic shared memory usage intended, in bytes

  • blockSizeLimit (int) – The maximum block size func is designed to handle

Returns:

See also

cudaOccupancyMaxPotentialBlockSize
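
A small Python sketch of the simple case described above, with no per-block dynamic shared memory (both blockSizeToDynamicSMemSize and dynamicSMemSize left as 0) and no block size limit; the (CUresult, minGridSize, blockSize) output ordering is assumed to mirror the C out-parameters:

    from cuda.bindings import driver

    err, min_grid_size, block_size = driver.cuOccupancyMaxPotentialBlockSize(
        kernel,  # placeholder CUfunction
        0,       # blockSizeToDynamicSMemSize: not needed, left as 0
        0,       # dynamicSMemSize: no dynamic shared memory
        0)       # blockSizeLimit: 0 means use the device/function maximum
    assert err == driver.CUresult.CUDA_SUCCESS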

cuda.bindings.driver.cuOccupancyMaxPotentialBlockSizeWithFlags(func, blockSizeToDynamicSMemSize, size_t dynamicSMemSize, int blockSizeLimit, unsigned int flags)

Suggest a launch configuration with reasonable occupancy.

An extended version of cuOccupancyMaxPotentialBlockSize. In addition to arguments passed to cuOccupancyMaxPotentialBlockSize, cuOccupancyMaxPotentialBlockSizeWithFlags also takes a Flags parameter.

The Flags parameter controls how special cases are handled. The valid flags are:

Note that the API can also be used with context-less kernel CUkernel by querying the handle using cuLibraryGetKernel() and then passing it to the API by casting to CUfunction. Here, the context to use for calculations will be the current context.

Parameters:
  • func (CUfunction) – Kernel for which launch configuration is calculated

  • blockSizeToDynamicSMemSize (CUoccupancyB2DSize) – A function that calculates how much per-block dynamic shared memory func uses based on the block size

  • dynamicSMemSize (size_t) – Dynamic shared memory usage intended, in bytes

  • blockSizeLimit (int) – The maximum block size func is designed to handle

  • flags (unsigned int) – Options

Returns:

See also

cudaOccupancyMaxPotentialBlockSizeWithFlags

cuda.bindings.driver.cuOccupancyAvailableDynamicSMemPerBlock(func, int numBlocks, int blockSize)

Returns dynamic shared memory available per block when launching numBlocks blocks on SM.

Returns in *dynamicSmemSize the maximum size of dynamic shared memory to allow numBlocks blocks per SM.

Note that the API can also be used with context-less kernel CUkernel by querying the handle using cuLibraryGetKernel() and then passing it to the API by casting to CUfunction. Here, the context to use for calculations will be the current context.

Parameters:
  • func (CUfunction) – Kernel function for which occupancy is calculated

  • numBlocks (int) – Number of blocks to fit on SM

  • blockSize (int) – Size of the blocks

Returns:
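
For example, a minimal Python sketch asking how much dynamic shared memory is left per block if 4 blocks of 128 threads should fit on one SM (kernel is a placeholder CUfunction; return tuple shape assumed):

    from cuda.bindings import driver

    err, smem_per_block = driver.cuOccupancyAvailableDynamicSMemPerBlock(kernel, 4, 128)
    assert err == driver.CUresult.CUDA_SUCCESS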

cuda.bindings.driver.cuOccupancyMaxPotentialClusterSize(func, CUlaunchConfig config: Optional[CUlaunchConfig])

Given the kernel function (func) and launch configuration (config), return the maximum cluster size in *clusterSize.

The cluster dimensions in config are ignored. If func has a required cluster size set (see cudaFuncGetAttributes / cuFuncGetAttribute), *clusterSize will reflect the required cluster size.

By default this function will always return a value that’s portable on future hardware. A higher value may be returned if the kernel function allows non-portable cluster sizes.

This function will respect the compile time launch bounds.

Note that the API can also be used with context-less kernel CUkernel by querying the handle using cuLibraryGetKernel() and then passing it to the API by casting to CUfunction. Here, the context to use for calculations will either be taken from the specified stream config->hStream or the current context in case of NULL stream.

Parameters:
  • func (CUfunction) – Kernel function for which maximum cluster size is calculated

  • config (CUlaunchConfig) – Launch configuration for the given kernel function

Returns:

cuda.bindings.driver.cuOccupancyMaxActiveClusters(func, CUlaunchConfig config: Optional[CUlaunchConfig])

Given the kernel function (func) and launch configuration (config), return the maximum number of clusters that could co-exist on the target device in *numClusters.

If the function has a required cluster size already set (see cudaFuncGetAttributes / cuFuncGetAttribute), the cluster size from config must either be unspecified or match the required size. If no required cluster size is set, the cluster size must be specified in config; otherwise the function will return an error.

Note that various attributes of the kernel function may affect the occupancy calculation. The runtime environment may affect how the hardware schedules the clusters, so the calculated occupancy is not guaranteed to be achievable.

Note that the API can also be used with context-less kernel CUkernel by querying the handle using cuLibraryGetKernel() and then passing it to the API by casting to CUfunction. Here, the context to use for calculations will either be taken from the specified stream config->hStream or the current context in case of NULL stream.

Parameters:
  • func (CUfunction) – Kernel function for which maximum number of clusters are calculated

  • config (CUlaunchConfig) – Launch configuration for the given kernel function

Returns:
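
A sketch of querying cluster limits in Python, assuming CUlaunchConfig exposes fields mirroring the C struct and kernel is a placeholder CUfunction; a default-constructed config leaves hStream NULL, so the current context is used:

    from cuda.bindings import driver

    config = driver.CUlaunchConfig()
    config.gridDimX, config.gridDimY, config.gridDimZ = 1024, 1, 1
    config.blockDimX, config.blockDimY, config.blockDimZ = 128, 1, 1
    # hStream is left at its default (NULL), so the current context is used.

    # Cluster dimensions in `config` are ignored by this query.
    err, max_cluster_size = driver.cuOccupancyMaxPotentialClusterSize(kernel, config)
    assert err == driver.CUresult.CUDA_SUCCESS
    # For cuOccupancyMaxActiveClusters the cluster dimension would additionally
    # have to be supplied through the config's launch attributes, unless the
    # kernel already has a required cluster size set.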

Texture Object Management

This section describes the texture object management functions of the low-level CUDA driver application programming interface. The texture object API is only supported on devices of compute capability 3.0 or higher.

cuda.bindings.driver.cuTexObjectCreate(CUDA_RESOURCE_DESC pResDesc: Optional[CUDA_RESOURCE_DESC], CUDA_TEXTURE_DESC pTexDesc: Optional[CUDA_TEXTURE_DESC], CUDA_RESOURCE_VIEW_DESC pResViewDesc: Optional[CUDA_RESOURCE_VIEW_DESC])

Creates a texture object.

Creates a texture object and returns it in pTexObject. pResDesc describes the data to texture from. pTexDesc describes how the data should be sampled. pResViewDesc is an optional argument that specifies an alternate format for the data described by pResDesc, and also describes the subresource region to restrict access to when texturing. pResViewDesc can only be specified if the type of resource is a CUDA array or a CUDA mipmapped array not in a block compressed format.

Texture objects are only supported on devices of compute capability 3.0 or higher. Additionally, a texture object is an opaque value, and, as such, should only be accessed through CUDA API calls.

The CUDA_RESOURCE_DESC structure is defined as:

View CUDA Toolkit Documentation for a C++ code example

where:

  • resType specifies the type of resource to texture from. CUresourceType is defined as:

  • View CUDA Toolkit Documentation for a C++ code example

If resType is set to CU_RESOURCE_TYPE_ARRAY, CUDA_RESOURCE_DESC::res::array::hArray must be set to a valid CUDA array handle.

If resType is set to CU_RESOURCE_TYPE_MIPMAPPED_ARRAY, CUDA_RESOURCE_DESC::res::mipmap::hMipmappedArray must be set to a valid CUDA mipmapped array handle.

If resType is set to CU_RESOURCE_TYPE_LINEAR, CUDA_RESOURCE_DESC::res::linear::devPtr must be set to a valid device pointer, that is aligned to CU_DEVICE_ATTRIBUTE_TEXTURE_ALIGNMENT. CUDA_RESOURCE_DESC::res::linear::format and CUDA_RESOURCE_DESC::res::linear::numChannels describe the format of each component and the number of components per array element. CUDA_RESOURCE_DESC::res::linear::sizeInBytes specifies the size of the array in bytes. The total number of elements in the linear address range cannot exceed CU_DEVICE_ATTRIBUTE_MAXIMUM_TEXTURE1D_LINEAR_WIDTH. The number of elements is computed as (sizeInBytes / (sizeof(format) * numChannels)).

If resType is set to CU_RESOURCE_TYPE_PITCH2D, CUDA_RESOURCE_DESC::res::pitch2D::devPtr must be set to a valid device pointer, that is aligned to CU_DEVICE_ATTRIBUTE_TEXTURE_ALIGNMENT. CUDA_RESOURCE_DESC::res::pitch2D::format and CUDA_RESOURCE_DESC::res::pitch2D::numChannels describe the format of each component and the number of components per array element. CUDA_RESOURCE_DESC::res::pitch2D::width and CUDA_RESOURCE_DESC::res::pitch2D::height specify the width and height of the array in elements, and cannot exceed CU_DEVICE_ATTRIBUTE_MAXIMUM_TEXTURE2D_LINEAR_WIDTH and CU_DEVICE_ATTRIBUTE_MAXIMUM_TEXTURE2D_LINEAR_HEIGHT respectively. CUDA_RESOURCE_DESC::res::pitch2D::pitchInBytes specifies the pitch between two rows in bytes and has to be aligned to CU_DEVICE_ATTRIBUTE_TEXTURE_PITCH_ALIGNMENT. Pitch cannot exceed CU_DEVICE_ATTRIBUTE_MAXIMUM_TEXTURE2D_LINEAR_PITCH.

  • flags must be set to zero.

The CUDA_TEXTURE_DESC struct is defined as

View CUDA Toolkit Documentation for a C++ code example

where

  • addressMode specifies the addressing mode for each dimension of the texture data. CUaddress_mode is defined as:

  • View CUDA Toolkit Documentation for a C++ code example

  • This is ignored if resType is CU_RESOURCE_TYPE_LINEAR. Also, if the flag, CU_TRSF_NORMALIZED_COORDINATES is not set, the only supported address mode is CU_TR_ADDRESS_MODE_CLAMP.

  • filterMode specifies the filtering mode to be used when fetching from the texture. CUfilter_mode is defined as:

  • View CUDA Toolkit Documentation for a C++ code example

  • This is ignored if resType is CU_RESOURCE_TYPE_LINEAR.

  • flags can be any combination of the following:

    • CU_TRSF_READ_AS_INTEGER, which suppresses the default behavior of having the texture promote integer data to floating point data in the range [0, 1]. Note that a texture with a 32-bit integer format would not be promoted, regardless of whether or not this flag is specified.

    • CU_TRSF_NORMALIZED_COORDINATES, which suppresses the default behavior of having the texture coordinates range from [0, Dim) where Dim is the width or height of the CUDA array. Instead, the texture coordinates [0, 1.0) reference the entire breadth of the array dimension; Note that for CUDA mipmapped arrays, this flag has to be set.

    • CU_TRSF_DISABLE_TRILINEAR_OPTIMIZATION, which disables any trilinear filtering optimizations. Trilinear optimizations improve texture filtering performance by allowing bilinear filtering on textures in scenarios where it can closely approximate the expected results.

    • CU_TRSF_SEAMLESS_CUBEMAP, which enables seamless cube map filtering. This flag can only be specified if the underlying resource is a CUDA array or a CUDA mipmapped array that was created with the flag CUDA_ARRAY3D_CUBEMAP. When seamless cube map filtering is enabled, texture address modes specified by addressMode are ignored. Instead, if the filterMode is set to CU_TR_FILTER_MODE_POINT the address mode CU_TR_ADDRESS_MODE_CLAMP will be applied for all dimensions. If the filterMode is set to CU_TR_FILTER_MODE_LINEAR seamless cube map filtering will be performed when sampling along the cube face borders.

  • maxAnisotropy specifies the maximum anisotropy ratio to be used when doing anisotropic filtering. This value will be clamped to the range [1,16].

  • mipmapFilterMode specifies the filter mode when the calculated mipmap level lies between two defined mipmap levels.

  • mipmapLevelBias specifies the offset to be applied to the calculated mipmap level.

  • minMipmapLevelClamp specifies the lower end of the mipmap level range to clamp access to.

  • maxMipmapLevelClamp specifies the upper end of the mipmap level range to clamp access to.

The CUDA_RESOURCE_VIEW_DESC struct is defined as

View CUDA Toolkit Documentation for a C++ code example

where:

  • format specifies how the data contained in the CUDA array or CUDA mipmapped array should be interpreted. Note that this can incur a change in size of the texture data. If the resource view format is a block compressed format, then the underlying CUDA array or CUDA mipmapped array has to have a base format of CU_AD_FORMAT_UNSIGNED_INT32 with 2 or 4 channels, depending on the block compressed format. For example, BC1 and BC4 require the underlying CUDA array to have a format of CU_AD_FORMAT_UNSIGNED_INT32 with 2 channels. The other BC formats require the underlying resource to have the same base format but with 4 channels.

  • width specifies the new width of the texture data. If the resource view format is a block compressed format, this value has to be 4 times the original width of the resource. For non block compressed formats, this value has to be equal to that of the original resource.

  • height specifies the new height of the texture data. If the resource view format is a block compressed format, this value has to be 4 times the original height of the resource. For non block compressed formats, this value has to be equal to that of the original resource.

  • depth specifies the new depth of the texture data. This value has to be equal to that of the original resource.

  • firstMipmapLevel specifies the most detailed mipmap level. This will be the new mipmap level zero. For non-mipmapped resources, this value has to be zero. minMipmapLevelClamp and maxMipmapLevelClamp will be relative to this value. For example, if the firstMipmapLevel is set to 2, and a minMipmapLevelClamp of 1.2 is specified, then the actual minimum mipmap level clamp will be 3.2.

  • lastMipmapLevel specifies the least detailed mipmap level. For non-mipmapped resources, this value has to be zero.

  • firstLayer specifies the first layer index for layered textures. This will be the new layer zero. For non-layered resources, this value has to be zero.

  • lastLayer specifies the last layer index for layered textures. For non-layered resources, this value has to be zero.

Parameters:
Returns:
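
A minimal creation/destruction sketch in Python, assuming cu_array is an existing CUDA array handle (e.g. from cuArrayCreate) and that the descriptor structs expose fields mirroring the C layout shown in the Toolkit documentation:

    from cuda.bindings import driver

    res_desc = driver.CUDA_RESOURCE_DESC()
    res_desc.resType = driver.CUresourcetype.CU_RESOURCE_TYPE_ARRAY
    res_desc.res.array.hArray = cu_array
    res_desc.flags = 0                      # must be zero

    tex_desc = driver.CUDA_TEXTURE_DESC()
    tex_desc.addressMode[0] = driver.CUaddress_mode.CU_TR_ADDRESS_MODE_CLAMP
    tex_desc.addressMode[1] = driver.CUaddress_mode.CU_TR_ADDRESS_MODE_CLAMP
    tex_desc.filterMode = driver.CUfilter_mode.CU_TR_FILTER_MODE_POINT
    tex_desc.flags = 0   # unnormalized coordinates; only CLAMP is valid then

    err, tex_obj = driver.cuTexObjectCreate(res_desc, tex_desc, None)  # no resource view
    assert err == driver.CUresult.CUDA_SUCCESS
    # ... sample tex_obj from kernels ...
    err, = driver.cuTexObjectDestroy(tex_obj)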

cuda.bindings.driver.cuTexObjectDestroy(texObject)

Destroys a texture object.

Destroys the texture object specified by texObject.

Parameters:

texObject (CUtexObject) – Texture object to destroy

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult

cuda.bindings.driver.cuTexObjectGetResourceDesc(texObject)

Returns a texture object’s resource descriptor.

Returns the resource descriptor for the texture object specified by texObject.

Parameters:

texObject (CUtexObject) – Texture object

Returns:

cuda.bindings.driver.cuTexObjectGetTextureDesc(texObject)

Returns a texture object’s texture descriptor.

Returns the texture descriptor for the texture object specified by texObject.

Parameters:

texObject (CUtexObject) – Texture object

Returns:

cuda.bindings.driver.cuTexObjectGetResourceViewDesc(texObject)

Returns a texture object’s resource view descriptor.

Returns the resource view descriptor for the texture object specified by texObject. If no resource view was set for texObject, CUDA_ERROR_INVALID_VALUE is returned.

Parameters:

texObject (CUtexObject) – Texture object

Returns:

Surface Object Management

This section describes the surface object management functions of the low-level CUDA driver application programming interface. The surface object API is only supported on devices of compute capability 3.0 or higher.

cuda.bindings.driver.cuSurfObjectCreate(CUDA_RESOURCE_DESC pResDesc: Optional[CUDA_RESOURCE_DESC])

Creates a surface object.

Creates a surface object and returns it in pSurfObject. pResDesc describes the data to perform surface load/stores on. resType must be CU_RESOURCE_TYPE_ARRAY and CUDA_RESOURCE_DESC::res::array::hArray must be set to a valid CUDA array handle. flags must be set to zero.

Surface objects are only supported on devices of compute capability 3.0 or higher. Additionally, a surface object is an opaque value, and, as such, should only be accessed through CUDA API calls.

Parameters:

pResDesc (CUDA_RESOURCE_DESC) – Resource descriptor

Returns:
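
A minimal Python sketch, assuming cu_array is a CUDA array created with the CUDA_ARRAY3D_SURFACE_LDST flag and that the struct fields mirror the C layout:

    from cuda.bindings import driver

    res_desc = driver.CUDA_RESOURCE_DESC()
    res_desc.resType = driver.CUresourcetype.CU_RESOURCE_TYPE_ARRAY
    res_desc.res.array.hArray = cu_array
    res_desc.flags = 0                      # must be zero

    err, surf_obj = driver.cuSurfObjectCreate(res_desc)
    assert err == driver.CUresult.CUDA_SUCCESS
    # ... perform surface loads/stores from kernels ...
    err, = driver.cuSurfObjectDestroy(surf_obj)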

cuda.bindings.driver.cuSurfObjectDestroy(surfObject)

Destroys a surface object.

Destroys the surface object specified by surfObject.

Parameters:

surfObject (CUsurfObject) – Surface object to destroy

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult

cuda.bindings.driver.cuSurfObjectGetResourceDesc(surfObject)

Returns a surface object’s resource descriptor.

Returns the resource descriptor for the surface object specified by surfObject.

Parameters:

surfObject (CUsurfObject) – Surface object

Returns:

Tensor Map Object Management

This section describes the tensor map object management functions of the low-level CUDA driver application programming interface. The tensor core API is only supported on devices of compute capability 9.0 or higher.

cuda.bindings.driver.cuTensorMapEncodeTiled(tensorDataType: CUtensorMapDataType, tensorRank, globalAddress, globalDim: Tuple[cuuint64_t] | List[cuuint64_t] | None, globalStrides: Tuple[cuuint64_t] | List[cuuint64_t] | None, boxDim: Tuple[cuuint32_t] | List[cuuint32_t] | None, elementStrides: Tuple[cuuint32_t] | List[cuuint32_t] | None, interleave: CUtensorMapInterleave, swizzle: CUtensorMapSwizzle, l2Promotion: CUtensorMapL2promotion, oobFill: CUtensorMapFloatOOBfill)

Create a tensor map descriptor object representing tiled memory region.

Creates a descriptor for a Tensor Memory Access (TMA) object specified by the parameters describing a tiled region and returns it in tensorMap.

Tensor map objects are only supported on devices of compute capability 9.0 or higher. Additionally, a tensor map object is an opaque value, and, as such, should only be accessed through CUDA API calls.

The parameters passed are bound to the following requirements:

  • tensorMap address must be aligned to 64 bytes.

  • tensorDataType has to be an enum from CUtensorMapDataType which is defined as:

  • View CUDA Toolkit Documentation for a C++ code example

  • tensorRank must be non-zero and less than or equal to the maximum supported dimensionality of 5. If interleave is not CU_TENSOR_MAP_INTERLEAVE_NONE, then tensorRank must additionally be greater than or equal to 3.

  • globalAddress, which specifies the starting address of the memory region described, must be 32 byte aligned when interleave is CU_TENSOR_MAP_INTERLEAVE_32B and 16 byte aligned otherwise.

  • globalDim array, which specifies tensor size of each of the tensorRank dimensions, must be non-zero and less than or equal to 2^32.

  • globalStrides array, which specifies tensor stride of each of the lower tensorRank - 1 dimensions in bytes, must be a multiple of 16 and less than 2^40. Additionally, the stride must be a multiple of 32 when interleave is CU_TENSOR_MAP_INTERLEAVE_32B. Each following dimension specified includes previous dimension stride:

  • View CUDA Toolkit Documentation for a C++ code example

  • boxDim array, which specifies the number of elements to be traversed along each of the tensorRank dimensions, must be non-zero and less than or equal to 256. When interleave is CU_TENSOR_MAP_INTERLEAVE_NONE, { boxDim[0] * elementSizeInBytes(tensorDataType) } must be a multiple of 16 bytes.

  • elementStrides array, which specifies the iteration step along each of the tensorRank dimensions, must be non-zero and less than or equal to 8. Note that when interleave is CU_TENSOR_MAP_INTERLEAVE_NONE, the first element of this array is ignored since TMA doesn’t support the stride for dimension zero. When all elements of the elementStrides array are one, boxDim specifies the number of elements to load. However, if elementStrides[i] is not equal to one, then TMA loads ceil(boxDim[i] / elementStrides[i]) elements along the i-th dimension. To load N elements along the i-th dimension, boxDim[i] must be set to N * elementStrides[i].

  • interleave specifies the interleaved layout of type CUtensorMapInterleave, which is defined as:

  • View CUDA Toolkit Documentation for a C++ code example

  • TMA supports interleaved layouts like NC/8HWC8 where C8 utilizes 16 bytes in memory assuming 2 bytes per channel, or NC/16HWC16 where C16 uses 32 bytes. When interleave is CU_TENSOR_MAP_INTERLEAVE_NONE and swizzle is not CU_TENSOR_MAP_SWIZZLE_NONE, the bounding box inner dimension (computed as boxDim[0] multiplied by the element size derived from tensorDataType) must be less than or equal to the swizzle size.

    • CU_TENSOR_MAP_SWIZZLE_32B implies the bounding box inner dimension will be <= 32.

    • CU_TENSOR_MAP_SWIZZLE_64B implies the bounding box inner dimension will be <= 64.

    • CU_TENSOR_MAP_SWIZZLE_128B implies the bounding box inner dimension will be <= 128.

  • swizzle, which specifies the shared memory bank swizzling pattern, has to be of type CUtensorMapSwizzle which is defined as:

  • View CUDA Toolkit Documentation for a C++ code example

  • Data are organized in a specific order in global memory; however, this may not match the order in which the application accesses data in shared memory. This difference in data organization may cause bank conflicts when shared memory is accessed. In order to avoid this problem, data can be loaded to shared memory with shuffling across shared memory banks. When interleave is CU_TENSOR_MAP_INTERLEAVE_32B, swizzle must be CU_TENSOR_MAP_SWIZZLE_32B. Other interleave modes can have any swizzling pattern.

  • l2Promotion specifies the L2 fetch size, which indicates the byte granularity at which L2 requests are filled from DRAM. It must be of type CUtensorMapL2promotion, which is defined as:

  • View CUDA Toolkit Documentation for a C++ code example

  • oobFill, which indicates whether zero or a special NaN constant should be used to fill out-of-bound elements, must be of type CUtensorMapFloatOOBfill which is defined as:

  • View CUDA Toolkit Documentation for a C++ code example

  • Note that CU_TENSOR_MAP_FLOAT_OOB_FILL_NAN_REQUEST_ZERO_FMA can only be used when tensorDataType represents a floating-point data type.

Parameters:
  • tensorDataType (CUtensorMapDataType) – Tensor data type

  • tensorRank (Any) – Dimensionality of tensor

  • globalAddress (Any) – Starting address of memory region described by tensor

  • globalDim (List[cuuint64_t]) – Array containing tensor size (number of elements) along each of the tensorRank dimensions

  • globalStrides (List[cuuint64_t]) – Array containing stride size (in bytes) along each of the tensorRank - 1 dimensions

  • boxDim (List[cuuint32_t]) – Array containing traversal box size (number of elements) along each of the tensorRank dimensions. Specifies how many elements are to be traversed along each tensor dimension.

  • elementStrides (List[cuuint32_t]) – Array containing traversal stride in each of the tensorRank dimensions

  • interleave (CUtensorMapInterleave) – Type of interleaved layout the tensor addresses

  • swizzle (CUtensorMapSwizzle) – Bank swizzling pattern inside shared memory

  • l2Promotion (CUtensorMapL2promotion) – L2 promotion size

  • oobFill (CUtensorMapFloatOOBfill) – Indicate whether zero or special NaN constant must be used to fill out-of-bound elements

Returns:
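
An illustrative Python sketch encoding a descriptor for a 1024x1024 row-major float32 tensor at device pointer d_ptr (obtained earlier, e.g. from cuMemAlloc), traversed in 64x64 tiles; the wrapper types and enum names come from this module, but the parameter values are only one example of satisfying the constraints above:

    from cuda.bindings import driver

    H, W = 1024, 1024
    elem_size = 4  # bytes per float32 element

    err, tensor_map = driver.cuTensorMapEncodeTiled(
        driver.CUtensorMapDataType.CU_TENSOR_MAP_DATA_TYPE_FLOAT32,
        2,                                              # tensorRank
        d_ptr,                                          # globalAddress (16-byte aligned)
        [driver.cuuint64_t(W), driver.cuuint64_t(H)],   # globalDim
        [driver.cuuint64_t(W * elem_size)],             # globalStrides (rank - 1 entries, in bytes)
        [driver.cuuint32_t(64), driver.cuuint32_t(64)], # boxDim (64 * 4 bytes is a multiple of 16)
        [driver.cuuint32_t(1), driver.cuuint32_t(1)],   # elementStrides
        driver.CUtensorMapInterleave.CU_TENSOR_MAP_INTERLEAVE_NONE,
        driver.CUtensorMapSwizzle.CU_TENSOR_MAP_SWIZZLE_NONE,
        driver.CUtensorMapL2promotion.CU_TENSOR_MAP_L2_PROMOTION_NONE,
        driver.CUtensorMapFloatOOBfill.CU_TENSOR_MAP_FLOAT_OOB_FILL_NONE)
    assert err == driver.CUresult.CUDA_SUCCESS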

cuda.bindings.driver.cuTensorMapEncodeIm2col(tensorDataType: CUtensorMapDataType, tensorRank, globalAddress, globalDim: Tuple[cuuint64_t] | List[cuuint64_t] | None, globalStrides: Tuple[cuuint64_t] | List[cuuint64_t] | None, pixelBoxLowerCorner: Tuple[int] | List[int] | None, pixelBoxUpperCorner: Tuple[int] | List[int] | None, channelsPerPixel, pixelsPerColumn, elementStrides: Tuple[cuuint32_t] | List[cuuint32_t] | None, interleave: CUtensorMapInterleave, swizzle: CUtensorMapSwizzle, l2Promotion: CUtensorMapL2promotion, oobFill: CUtensorMapFloatOOBfill)

Create a tensor map descriptor object representing im2col memory region.

Creates a descriptor for a Tensor Memory Access (TMA) object specified by the parameters describing an im2col memory layout and returns it in tensorMap.

Tensor map objects are only supported on devices of compute capability 9.0 or higher. Additionally, a tensor map object is an opaque value, and, as such, should only be accessed through CUDA API calls.

The parameters passed are bound to the following requirements:

  • tensorMap address must be aligned to 64 bytes.

  • tensorDataType has to be an enum from CUtensorMapDataType which is defined as:

  • View CUDA Toolkit Documentation for a C++ code example

  • tensorRank, which specifies the number of tensor dimensions, must be 3, 4, or 5.

  • globalAddress, which specifies the starting address of the memory region described, must be 32 byte aligned when interleave is CU_TENSOR_MAP_INTERLEAVE_32B and 16 byte aligned otherwise.

  • globalDim array, which specifies tensor size of each of the tensorRank dimensions, must be non-zero and less than or equal to 2^32.

  • globalStrides array, which specifies tensor stride of each of the lower tensorRank - 1 dimensions in bytes, must be a multiple of 16 and less than 2^40. Additionally, the stride must be a multiple of 32 when interleave is CU_TENSOR_MAP_INTERLEAVE_32B. Each following dimension specified includes previous dimension stride:

  • View CUDA Toolkit Documentation for a C++ code example

  • pixelBoxLowerCorner array specifies the coordinate offsets {D, H, W} of the bounding box from top/left/front corner. The number of offsets and their precision depend on the tensor dimensionality:

    • When tensorRank is 3, one signed offset within range [-32768, 32767] is supported.

    • When tensorRank is 4, two signed offsets each within range [-128, 127] are supported.

    • When tensorRank is 5, three offsets each within range [-16, 15] are supported.

  • pixelBoxUpperCorner array specifies the coordinate offsets {D, H, W} of the bounding box from bottom/right/back corner. The number of offsets and their precision depend on the tensor dimensionality:

    • When tensorRank is 3, one signed offset within range [-32768, 32767] is supported.

    • When tensorRank is 4, two signed offsets each within range [-128, 127] are supported.

    • When tensorRank is 5, three offsets each within range [-16, 15] are supported. The bounding box specified by pixelBoxLowerCorner and pixelBoxUpperCorner must have non-zero area.

  • channelsPerPixel, which specifies the number of elements which must be accessed along C dimension, must be less than or equal to 256.

  • pixelsPerColumn, which specifies the number of elements that must be accessed along the {N, D, H, W} dimensions, must be less than or equal to 1024.

  • elementStrides array, which specifies the iteration step along each of the tensorRank dimensions, must be non-zero and less than or equal to 8. Note that when interleave is CU_TENSOR_MAP_INTERLEAVE_NONE, the first element of this array is ignored since TMA doesn’t support the stride for dimension zero. When all elements of the elementStrides array are one, boxDim specifies the number of elements to load. However, if elementStrides[i] is not equal to one for some i, then TMA loads ceil(boxDim[i] / elementStrides[i]) elements along the i-th dimension. To load N elements along the i-th dimension, boxDim[i] must be set to N * elementStrides[i].

  • interleave specifies the interleaved layout of type CUtensorMapInterleave, which is defined as:

  • View CUDA Toolkit Documentation for a C++ code example

  • TMA supports interleaved layouts like NC/8HWC8 where C8 utilizes 16 bytes in memory assuming 2 bytes per channel, or NC/16HWC16 where C16 uses 32 bytes. When interleave is CU_TENSOR_MAP_INTERLEAVE_NONE and swizzle is not CU_TENSOR_MAP_SWIZZLE_NONE, the bounding box inner dimension (computed as boxDim[0] multiplied by the element size derived from tensorDataType) must be less than or equal to the swizzle size.

    • CU_TENSOR_MAP_SWIZZLE_32B implies the bounding box inner dimension will be <= 32.

    • CU_TENSOR_MAP_SWIZZLE_64B implies the bounding box inner dimension will be <= 64.

    • CU_TENSOR_MAP_SWIZZLE_128B implies the bounding box inner dimension will be <= 128.

  • swizzle, which specifies the shared memory bank swizzling pattern, has to be of type CUtensorMapSwizzle which is defined as:

  • View CUDA Toolkit Documentation for a C++ code example

  • Data are organized in a specific order in global memory; however, this may not match the order in which the application accesses data in shared memory. This difference in data organization may cause bank conflicts when shared memory is accessed. In order to avoid this problem, data can be loaded to shared memory with shuffling across shared memory banks. When interleave is CU_TENSOR_MAP_INTERLEAVE_32B, swizzle must be CU_TENSOR_MAP_SWIZZLE_32B. Other interleave modes can have any swizzling pattern.

  • l2Promotion specifies L2 fetch size which indicates the byte granularity at which L2 requests are filled from DRAM. It must be of type CUtensorMapL2promotion, which is defined as:

  • View CUDA Toolkit Documentation for a C++ code example

  • oobFill, which indicates whether zero or a special NaN constant should be used to fill out-of-bound elements, must be of type CUtensorMapFloatOOBfill which is defined as:

  • View CUDA Toolkit Documentation for a C++ code example

  • Note that CU_TENSOR_MAP_FLOAT_OOB_FILL_NAN_REQUEST_ZERO_FMA can only be used when tensorDataType represents a floating-point data type.

Parameters:
  • tensorDataType (CUtensorMapDataType) – Tensor data type

  • tensorRank (Any) – Dimensionality of tensor; must be at least 3

  • globalAddress (Any) – Starting address of memory region described by tensor

  • globalDim (List[cuuint64_t]) – Array containing tensor size (number of elements) along each of the tensorRank dimensions

  • globalStrides (List[cuuint64_t]) – Array containing stride size (in bytes) along each of the tensorRank - 1 dimensions

  • pixelBoxLowerCorner (List[int]) – Array containing DHW dimensions of lower box corner

  • pixelBoxUpperCorner (List[int]) – Array containing DHW dimensions of upper box corner

  • channelsPerPixel (Any) – Number of channels per pixel

  • pixelsPerColumn (Any) – Number of pixels per column

  • elementStrides (List[cuuint32_t]) – Array containing traversal stride in each of the tensorRank dimensions

  • interleave (CUtensorMapInterleave) – Type of interleaved layout the tensor addresses

  • swizzle (CUtensorMapSwizzle) – Bank swizzling pattern inside shared memory

  • l2Promotion (CUtensorMapL2promotion) – L2 promotion size

  • oobFill (CUtensorMapFloatOOBfill) – Indicate whether zero or special NaN constant will be used to fill out-of-bound elements

Returns:

cuda.bindings.driver.cuTensorMapReplaceAddress(CUtensorMap tensorMap: Optional[CUtensorMap], globalAddress)

Modify an existing tensor map descriptor with an updated global address.

Modifies the descriptor for Tensor Memory Access (TMA) object passed in tensorMap with an updated globalAddress.

Tensor map objects are only supported on devices of compute capability 9.0 or higher. Additionally, a tensor map object is an opaque value, and, as such, should only be accessed through CUDA API calls.

Parameters:
  • tensorMap (CUtensorMap) – Tensor map object to modify

  • globalAddress (Any) – Starting address of memory region described by tensor, must follow previous alignment requirements

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult

Peer Context Memory Access

This section describes the direct peer context memory access functions of the low-level CUDA driver application programming interface.

cuda.bindings.driver.cuDeviceCanAccessPeer(dev, peerDev)

Queries if a device may directly access a peer device’s memory.

Returns in *canAccessPeer a value of 1 if contexts on dev are capable of directly accessing memory from contexts on peerDev and 0 otherwise. If direct access of peerDev from dev is possible, then access may be enabled on two specific contexts by calling cuCtxEnablePeerAccess().

Parameters:
  • dev (CUdevice) – Device from which allocations on peerDev are to be directly accessed.

  • peerDev (CUdevice) – Device on which the allocations to be directly accessed by dev reside.

Returns:

cuda.bindings.driver.cuCtxEnablePeerAccess(peerContext, unsigned int Flags)

Enables direct access to memory allocations in a peer context.

If both the current context and peerContext are on devices which support unified addressing (as may be queried using CU_DEVICE_ATTRIBUTE_UNIFIED_ADDRESSING) and same major compute capability, then on success all allocations from peerContext will immediately be accessible by the current context. See Unified Addressing for additional details.

Note that access granted by this call is unidirectional and that in order to access memory from the current context in peerContext, a separate symmetric call to cuCtxEnablePeerAccess() is required.

Note that there are both device-wide and system-wide limitations per system configuration, as noted in the CUDA Programming Guide under the section “Peer-to-Peer Memory Access”.

Returns CUDA_ERROR_PEER_ACCESS_UNSUPPORTED if cuDeviceCanAccessPeer() indicates that the CUdevice of the current context cannot directly access memory from the CUdevice of peerContext.

Returns CUDA_ERROR_PEER_ACCESS_ALREADY_ENABLED if direct access of peerContext from the current context has already been enabled.

Returns CUDA_ERROR_TOO_MANY_PEERS if direct peer access is not possible because hardware resources required for peer access have been exhausted.

Returns CUDA_ERROR_INVALID_CONTEXT if there is no current context, peerContext is not a valid context, or if the current context is peerContext.

Returns CUDA_ERROR_INVALID_VALUE if Flags is not 0.

Parameters:
  • peerContext (CUcontext) – Peer context to enable direct access to from the current context

  • Flags (unsigned int) – Reserved for future use and must be set to 0

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_PEER_ACCESS_ALREADY_ENABLED, CUDA_ERROR_TOO_MANY_PEERS, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_PEER_ACCESS_UNSUPPORTED, CUDA_ERROR_INVALID_VALUE

Return type:

CUresult
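
A short Python sketch of the query-then-enable pattern described above, assuming dev0 and dev1 are CUdevice handles, ctx1 is a context created on dev1, and a context on dev0 is current:

    from cuda.bindings import driver

    err, can_access = driver.cuDeviceCanAccessPeer(dev0, dev1)
    assert err == driver.CUresult.CUDA_SUCCESS
    if can_access:
        # Grants the current context (on dev0) access to allocations in ctx1.
        # Access is unidirectional; the reverse direction needs its own call.
        err, = driver.cuCtxEnablePeerAccess(ctx1, 0)  # Flags must be 0
        assert err == driver.CUresult.CUDA_SUCCESS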

cuda.bindings.driver.cuCtxDisablePeerAccess(peerContext)

Disables direct access to memory allocations in a peer context and unregisters any registered allocations.

Returns CUDA_ERROR_PEER_ACCESS_NOT_ENABLED if direct peer access has not yet been enabled from peerContext to the current context.

Returns CUDA_ERROR_INVALID_CONTEXT if there is no current context, or if peerContext is not a valid context.

Parameters:

peerContext (CUcontext) – Peer context to disable direct access to

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_PEER_ACCESS_NOT_ENABLED, CUDA_ERROR_INVALID_CONTEXT,

Return type:

CUresult

cuda.bindings.driver.cuDeviceGetP2PAttribute(attrib: CUdevice_P2PAttribute, srcDevice, dstDevice)

Queries attributes of the link between two devices.

Returns in *value the value of the requested attribute attrib of the link between srcDevice and dstDevice. The supported attributes are:

Returns CUDA_ERROR_INVALID_DEVICE if srcDevice or dstDevice are not valid or if they represent the same device.

Returns CUDA_ERROR_INVALID_VALUE if attrib is not valid or if value is a null pointer.

Parameters:
  • attrib (CUdevice_P2PAttribute) – The requested attribute of the link between srcDevice and dstDevice.

  • srcDevice (CUdevice) – The source device of the target link.

  • dstDevice (CUdevice) – The destination device of the target link.

Returns:

Graphics Interoperability

This section describes the graphics interoperability functions of the low-level CUDA driver application programming interface.

cuda.bindings.driver.cuGraphicsUnregisterResource(resource)

Unregisters a graphics resource for access by CUDA.

Unregisters the graphics resource resource so it is not accessible by CUDA unless registered again.

If resource is invalid then CUDA_ERROR_INVALID_HANDLE is returned.

Parameters:

resource (CUgraphicsResource) – Resource to unregister

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_HANDLE, CUDA_ERROR_UNKNOWN

Return type:

CUresult

See also

cuGraphicsD3D9RegisterResource, cuGraphicsD3D10RegisterResource, cuGraphicsD3D11RegisterResource, cuGraphicsGLRegisterBuffer, cuGraphicsGLRegisterImage, cudaGraphicsUnregisterResource

cuda.bindings.driver.cuGraphicsSubResourceGetMappedArray(resource, unsigned int arrayIndex, unsigned int mipLevel)

Get an array through which to access a subresource of a mapped graphics resource.

Returns in *pArray an array through which the subresource of the mapped graphics resource resource which corresponds to array index arrayIndex and mipmap level mipLevel may be accessed. The value set in *pArray may change every time that resource is mapped.

If resource is not a texture then it cannot be accessed via an array and CUDA_ERROR_NOT_MAPPED_AS_ARRAY is returned. If arrayIndex is not a valid array index for resource then CUDA_ERROR_INVALID_VALUE is returned. If mipLevel is not a valid mipmap level for resource then CUDA_ERROR_INVALID_VALUE is returned. If resource is not mapped then CUDA_ERROR_NOT_MAPPED is returned.

Parameters:
  • resource (CUgraphicsResource) – Mapped resource to access

  • arrayIndex (unsigned int) – Array index for array textures or cubemap face index as defined by CUarray_cubemap_face for cubemap textures for the subresource to access

  • mipLevel (unsigned int) – Mipmap level for the subresource to access

Returns:

cuda.bindings.driver.cuGraphicsResourceGetMappedMipmappedArray(resource)

Get a mipmapped array through which to access a mapped graphics resource.

Returns in *pMipmappedArray a mipmapped array through which the mapped graphics resource resource may be accessed. The value set in *pMipmappedArray may change every time that resource is mapped.

If resource is not a texture then it cannot be accessed via a mipmapped array and CUDA_ERROR_NOT_MAPPED_AS_ARRAY is returned. If resource is not mapped then CUDA_ERROR_NOT_MAPPED is returned.

Parameters:

resource (CUgraphicsResource) – Mapped resource to access

Returns:

cuda.bindings.driver.cuGraphicsResourceGetMappedPointer(resource)

Get a device pointer through which to access a mapped graphics resource.

Returns in *pDevPtr a pointer through which the mapped graphics resource resource may be accessed. Returns in pSize the size of the memory in bytes which may be accessed from that pointer. The value set in *pDevPtr may change every time that resource is mapped.

If resource is not a buffer then it cannot be accessed via a pointer and CUDA_ERROR_NOT_MAPPED_AS_POINTER is returned. If resource is not mapped then CUDA_ERROR_NOT_MAPPED is returned.

Parameters:

resource (CUgraphicsResource) – None

Returns:

  • CUresult

  • pDevPtr (CUdeviceptr) – None

  • pSize (int) – None

cuda.bindings.driver.cuGraphicsResourceSetMapFlags(resource, unsigned int flags)

Set usage flags for mapping a graphics resource.

Set flags for mapping the graphics resource resource.

Changes to flags will take effect the next time resource is mapped. The flags argument may be any of the following:

  • CU_GRAPHICS_MAP_RESOURCE_FLAGS_NONE: Specifies no hints about how this resource will be used. It is therefore assumed that this resource will be read from and written to by CUDA kernels. This is the default value.

  • CU_GRAPHICS_MAP_RESOURCE_FLAGS_READONLY: Specifies that CUDA kernels which access this resource will not write to this resource.

  • CU_GRAPHICS_MAP_RESOURCE_FLAGS_WRITEDISCARD: Specifies that CUDA kernels which access this resource will not read from this resource and will write over the entire contents of the resource, so none of the data previously stored in the resource will be preserved.

If resource is presently mapped for access by CUDA then CUDA_ERROR_ALREADY_MAPPED is returned. If flags is not one of the above values then CUDA_ERROR_INVALID_VALUE is returned.

Parameters:
  • resource (CUgraphicsResource) – Registered resource to set flags for

  • flags (unsigned int) – Parameters for resource mapping

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_INVALID_HANDLE, CUDA_ERROR_ALREADY_MAPPED

Return type:

CUresult

cuda.bindings.driver.cuGraphicsMapResources(unsigned int count, resources, hStream)

Map graphics resources for access by CUDA.

Maps the count graphics resources in resources for access by CUDA.

The resources in resources may be accessed by CUDA until they are unmapped. The graphics API from which resources were registered should not access any resources while they are mapped by CUDA. If an application does so, the results are undefined.

This function provides the synchronization guarantee that any graphics calls issued before cuGraphicsMapResources() will complete before any subsequent CUDA work issued in stream begins.

If resources includes any duplicate entries then CUDA_ERROR_INVALID_HANDLE is returned. If any of resources are presently mapped for access by CUDA then CUDA_ERROR_ALREADY_MAPPED is returned.

Parameters:
  • count (unsigned int) – Number of resources to map

  • resources (CUgraphicsResource) – Resources to map for CUDA usage

  • hStream (CUstream or cudaStream_t) – Stream with which to synchronize

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_HANDLE, CUDA_ERROR_ALREADY_MAPPED, CUDA_ERROR_UNKNOWN

Return type:

CUresult

cuda.bindings.driver.cuGraphicsUnmapResources(unsigned int count, resources, hStream)

Unmap graphics resources.

Unmaps the count graphics resources in resources.

Once unmapped, the resources in resources may not be accessed by CUDA until they are mapped again.

This function provides the synchronization guarantee that any CUDA work issued in stream before cuGraphicsUnmapResources() will complete before any subsequently issued graphics work begins.

If resources includes any duplicate entries then CUDA_ERROR_INVALID_HANDLE is returned. If any of resources are not presently mapped for access by CUDA then CUDA_ERROR_NOT_MAPPED is returned.

Parameters:
Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_HANDLE, CUDA_ERROR_NOT_MAPPED, CUDA_ERROR_UNKNOWN

Return type:

CUresult
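
A hedged Python sketch of the map / access / unmap sequence for a single buffer resource registered earlier (e.g. via cuGraphicsGLRegisterBuffer); passing the resource handle directly for count == 1 and the (CUresult, devPtr, size) return shape are assumptions about the binding:

    from cuda.bindings import driver

    err, = driver.cuGraphicsMapResources(1, resource, stream)
    assert err == driver.CUresult.CUDA_SUCCESS

    err, d_ptr, size = driver.cuGraphicsResourceGetMappedPointer(resource)
    # ... enqueue CUDA work on `stream` that reads/writes d_ptr ...

    err, = driver.cuGraphicsUnmapResources(1, resource, stream)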

Driver Entry Point Access

This section describes the driver entry point access functions of the low-level CUDA driver application programming interface.

cuda.bindings.driver.cuGetProcAddress(char *symbol, int cudaVersion, flags)

Returns the requested driver API function pointer.

Returns in **pfn the address of the CUDA driver function for the requested CUDA version and flags.

The CUDA version is specified as (1000 * major + 10 * minor), so CUDA 11.2 should be specified as 11020. For a requested driver symbol, if the specified CUDA version is greater than or equal to the CUDA version in which the driver symbol was introduced, this API will return the function pointer to the corresponding versioned function.

The pointer returned by the API should be cast to a function pointer matching the requested driver function’s definition in the API header file. The function pointer typedef can be picked up from the corresponding typedefs header file. For example, cudaTypedefs.h consists of function pointer typedefs for driver APIs defined in cuda.h.

The API will return CUDA_SUCCESS and set the returned pfn to NULL if the requested driver function is not supported on the platform, if no ABI compatible driver function exists for the specified cudaVersion, or if the driver symbol is invalid.

It will also set the optional symbolStatus to one of the values in CUdriverProcAddressQueryResult with the following meanings:

The requested flags can be:

Parameters:
  • symbol (bytes) – The base name of the driver API function to look for. As an example, for the driver API cuMemAlloc_v2, symbol would be cuMemAlloc and cudaVersion would be the ABI compatible CUDA version for the _v2 variant.

  • cudaVersion (int) – The CUDA version to look for the requested driver symbol

  • flags (Any) – Flags to specify search options.

Returns:
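
For illustration, a Python sketch requesting the cuMemAlloc entry point for CUDA 12.0; the exact shape of the returned tuple (error code, opaque function address, symbol status) is an assumption about the binding, and the address would normally be cast via ctypes to the matching typedef from cudaTypedefs.h:

    from cuda.bindings import driver

    err, pfn, sym_status = driver.cuGetProcAddress(
        b"cuMemAlloc",  # base name, without any _v2 style suffix
        12000,          # CUDA 12.0 encoded as 1000 * major + 10 * minor
        0)              # 0 selects the default search behavior
    assert err == driver.CUresult.CUDA_SUCCESS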

Coredump Attributes Control API

This section describes the coredump attribute control functions of the low-level CUDA driver application programming interface.

class cuda.bindings.driver.CUcoredumpSettings(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Flags for choosing a coredump attribute to get/set

CU_COREDUMP_ENABLE_ON_EXCEPTION = 1
CU_COREDUMP_TRIGGER_HOST = 2
CU_COREDUMP_LIGHTWEIGHT = 3
CU_COREDUMP_ENABLE_USER_TRIGGER = 4
CU_COREDUMP_FILE = 5
CU_COREDUMP_PIPE = 6
CU_COREDUMP_GENERATION_FLAGS = 7
CU_COREDUMP_MAX = 8
class cuda.bindings.driver.CUCoredumpGenerationFlags(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Flags for controlling coredump contents

CU_COREDUMP_DEFAULT_FLAGS = 0
CU_COREDUMP_SKIP_NONRELOCATED_ELF_IMAGES = 1
CU_COREDUMP_SKIP_GLOBAL_MEMORY = 2
CU_COREDUMP_SKIP_SHARED_MEMORY = 4
CU_COREDUMP_SKIP_LOCAL_MEMORY = 8
CU_COREDUMP_SKIP_ABORT = 16
CU_COREDUMP_SKIP_CONSTBANK_MEMORY = 32
CU_COREDUMP_LIGHTWEIGHT_FLAGS = 47
cuda.bindings.driver.cuCoredumpGetAttribute(attrib: CUcoredumpSettings)

Allows caller to fetch a coredump attribute value for the current context.

Returns in *value the requested value specified by attrib. It is up to the caller to ensure that the data type and size of *value matches the request.

If the caller calls this function with *value equal to NULL, the size of the memory region (in bytes) expected for attrib will be placed in size.

The supported attributes are:

  • CU_COREDUMP_ENABLE_ON_EXCEPTION: Bool where true means that GPU exceptions from this context will create a coredump at the location specified by CU_COREDUMP_FILE. The default value is false unless set to true globally or locally, or the CU_CTX_USER_COREDUMP_ENABLE flag was set during context creation.

  • CU_COREDUMP_TRIGGER_HOST: Bool where true means that the host CPU will also create a coredump. The default value is true unless set to false globally or locally. This value is deprecated as of CUDA 12.5 - raise the CU_COREDUMP_SKIP_ABORT flag to disable host device abort() if needed.

  • CU_COREDUMP_LIGHTWEIGHT: Bool where true means that any resulting coredumps will not have a dump of GPU memory or non-reloc ELF images. The default value is false unless set to true globally or locally. This attribute is deprecated as of CUDA 12.5, please use CU_COREDUMP_GENERATION_FLAGS instead.

  • CU_COREDUMP_ENABLE_USER_TRIGGER: Bool where true means that a coredump can be created by writing to the system pipe specified by CU_COREDUMP_PIPE. The default value is false unless set to true globally or locally.

  • CU_COREDUMP_FILE: String of up to 1023 characters that defines the location where any coredumps generated by this context will be written. The default value is core.cuda.HOSTNAME.PID where HOSTNAME is the host name of the machine running the CUDA applications and PID is the process ID of the CUDA application.

  • CU_COREDUMP_PIPE: String of up to 1023 characters that defines the name of the pipe that will be monitored if user-triggered coredumps are enabled. The default value is corepipe.cuda.HOSTNAME.PID where HOSTNAME is the host name of the machine running the CUDA application and PID is the process ID of the CUDA application.

  • CU_COREDUMP_GENERATION_FLAGS: An integer with values to allow granular control over the data contained in a coredump, specified as a bitwise OR combination of the following values:

Parameters:
  • attrib (CUcoredumpSettings) – The enum defining which value to fetch.

  • size (int) – The size of the memory region value points to.

Returns:

cuda.bindings.driver.cuCoredumpGetAttributeGlobal(attrib: CUcoredumpSettings)

Allows caller to fetch a coredump attribute value for the entire application.

Returns in *value the requested value specified by attrib. It is up to the caller to ensure that the data type and size of *value matches the request.

If the caller calls this function with *value equal to NULL, the size of the memory region (in bytes) expected for attrib will be placed in size.

The supported attributes are:

  • CU_COREDUMP_ENABLE_ON_EXCEPTION: Bool where true means that GPU exceptions from this context will create a coredump at the location specified by CU_COREDUMP_FILE. The default value is false.

  • CU_COREDUMP_TRIGGER_HOST: Bool where true means that the host CPU will also create a coredump. The default value is true unless set to false globally or locally. This value is deprecated as of CUDA 12.5 - raise the CU_COREDUMP_SKIP_ABORT flag to disable host device abort() if needed.

  • CU_COREDUMP_LIGHTWEIGHT: Bool where true means that any resulting coredumps will not have a dump of GPU memory or non-reloc ELF images. The default value is false. This attribute is deprecated as of CUDA 12.5, please use CU_COREDUMP_GENERATION_FLAGS instead.

  • CU_COREDUMP_ENABLE_USER_TRIGGER: Bool where true means that a coredump can be created by writing to the system pipe specified by CU_COREDUMP_PIPE. The default value is false.

  • CU_COREDUMP_FILE: String of up to 1023 characters that defines the location where any coredumps generated by this context will be written. The default value is core.cuda.HOSTNAME.PID where HOSTNAME is the host name of the machine running the CUDA applications and PID is the process ID of the CUDA application.

  • CU_COREDUMP_PIPE: String of up to 1023 characters that defines the name of the pipe that will be monitored if user-triggered coredumps are enabled. The default value is corepipe.cuda.HOSTNAME.PID where HOSTNAME is the host name of the machine running the CUDA application and PID is the process ID of the CUDA application.

  • CU_COREDUMP_GENERATION_FLAGS: An integer with values to allow granular control over the data contained in a coredump, specified as a bitwise OR combination of the following values:

Parameters:
  • attrib (CUcoredumpSettings) – The enum defining which value to fetch.

  • size (int) – The size of the memory region value points to.

Returns:

cuda.bindings.driver.cuCoredumpSetAttribute(attrib: CUcoredumpSettings, value)

Allows caller to set a coredump attribute value for the current context.

This function should be considered an alternate interface to the CUDA-GDB environment variables defined in this document: https://docs.nvidia.com/cuda/cuda-gdb/index.html#gpu-coredump

An important design decision to note is that any coredump environment variable values set before CUDA initializes will take permanent precedence over any values set with this function. This decision was made to ensure no change in behavior for any users that may be currently using these variables to get coredumps.

*value shall contain the requested value specified by set. It is up to the caller to ensure that the data type and size of *value matches the request.

If the caller calls this function with *value equal to NULL, the size of the memory region (in bytes) expected for set will be placed in size.

Note: This function will return CUDA_ERROR_NOT_SUPPORTED if the caller attempts to set CU_COREDUMP_ENABLE_ON_EXCEPTION on a GPU with compute capability < 6.0. cuCoredumpSetAttributeGlobal works on those platforms as an alternative.

Note: CU_COREDUMP_ENABLE_USER_TRIGGER and CU_COREDUMP_PIPE cannot be set on a per-context basis.

The supported attributes are:

Parameters:
  • attrib (CUcoredumpSettings) – The enum defining which value to set.

  • value (Any) – void* containing the requested data.

  • size (int) – The size of the memory region value points to.

Returns:

cuda.bindings.driver.cuCoredumpSetAttributeGlobal(attrib: CUcoredumpSettings, value)

Allows caller to set a coredump attribute value globally.

This function should be considered an alternate interface to the CUDA-GDB environment variables defined in this document: https://docs.nvidia.com/cuda/cuda-gdb/index.html#gpu-coredump

An important design decision to note is that any coredump environment variable values set before CUDA initializes will take permanent precedence over any values set with this function. This decision was made to ensure no change in behavior for any users that may be currently using these variables to get coredumps.

*value shall contain the requested value specified by set. It is up to the caller to ensure that the data type and size of *value matches the request.

If the caller calls this function with *value equal to NULL, the size of the memory region (in bytes) expected for set will be placed in size.

The supported attributes are:

  • CU_COREDUMP_ENABLE_ON_EXCEPTION: Bool where true means that GPU exceptions from this context will create a coredump at the location specified by CU_COREDUMP_FILE. The default value is false.

  • CU_COREDUMP_TRIGGER_HOST: Bool where true means that the host CPU will also create a coredump. The default value is true unless set to false globally or locally. This value is deprecated as of CUDA 12.5 - raise the CU_COREDUMP_SKIP_ABORT flag to disable host device abort() if needed.

  • CU_COREDUMP_LIGHTWEIGHT: Bool where true means that any resulting coredumps will not have a dump of GPU memory or non-reloc ELF images. The default value is false. This attribute is deprecated as of CUDA 12.5, please use CU_COREDUMP_GENERATION_FLAGS instead.

  • CU_COREDUMP_ENABLE_USER_TRIGGER: Bool where true means that a coredump can be created by writing to the system pipe specified by CU_COREDUMP_PIPE. The default value is false.

  • CU_COREDUMP_FILE: String of up to 1023 characters that defines the location where any coredumps generated by this context will be written. The default value is core.cuda.HOSTNAME.PID where HOSTNAME is the host name of the machine running the CUDA applications and PID is the process ID of the CUDA application.

  • CU_COREDUMP_PIPE: String of up to 1023 characters that defines the name of the pipe that will be monitored if user-triggered coredumps are enabled. This value may not be changed after CU_COREDUMP_ENABLE_USER_TRIGGER is set to true. The default value is corepipe.cuda.HOSTNAME.PID where HOSTNAME is the host name of the machine running the CUDA application and PID is the process ID of the CUDA application.

  • CU_COREDUMP_GENERATION_FLAGS: An integer with values to allow granular control over the data contained in a coredump, specified as a bitwise OR combination of the following values:

Parameters:
  • attrib (CUcoredumpSettings) – The enum defining which value to set.

  • value (Any) – void* containing the requested data.

  • size (int) – The size of the memory region value points to.

Returns:
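
A hypothetical Python sketch of setting global coredump attributes; how the value argument is marshalled (a Python bool and a bytes path below) is an assumption, and only the leading CUresult of the returned tuple is inspected:

    from cuda.bindings import driver

    settings = driver.CUcoredumpSettings

    # Enable coredumps on GPU exceptions for the whole application.
    err = driver.cuCoredumpSetAttributeGlobal(
        settings.CU_COREDUMP_ENABLE_ON_EXCEPTION, True)[0]
    assert err == driver.CUresult.CUDA_SUCCESS

    # Point generated coredumps at a custom file path.
    err = driver.cuCoredumpSetAttributeGlobal(
        settings.CU_COREDUMP_FILE, b"/tmp/my_app_core")[0]
    assert err == driver.CUresult.CUDA_SUCCESS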

Green Contexts

This section describes the APIs for creation and manipulation of green contexts in the CUDA driver. Green contexts are a lightweight alternative to traditional contexts, with the ability to pass in a set of resources that they should be initialized with. This allows the developer to represent distinct spatial partitions of the GPU, provision resources for them, and target them via the same programming model that CUDA exposes (streams, kernel launches, etc.).

There are 4 main steps to using this new set of APIs.

    1. Start with an initial set of resources, for example via cuDeviceGetDevResource. Only SM type is supported today.

    1. Partition this set of resources by providing them as input to a partition API, for example: cuDevSmResourceSplitByCount.

    1. Finalize the specification of resources by creating a descriptor via cuDevResourceGenerateDesc.

    1. Provision the resources and create a green context via cuGreenCtxCreate.
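
The sketch below walks through these four steps with cuda.bindings.driver. It is a minimal illustration only: it assumes the usual binding convention that every call returns a tuple whose first element is the CUresult, assumes the split call returns (err, groups, count, remaining), and omits error checking:

    from cuda.bindings import driver

    def create_green_ctx_sketch():
        # Initialize the driver and pick a device (error checks omitted).
        err, = driver.cuInit(0)
        err, dev = driver.cuDeviceGet(0)

        # Step 1: query the device-wide SM resource (only SM type is supported today).
        err, sm_resource = driver.cuDeviceGetDevResource(
            dev, driver.CUdevResourceType.CU_DEV_RESOURCE_TYPE_SM)

        # Step 2: split into two groups of at least 8 SMs each.
        # Assumed return layout: (err, groups, count, remaining).
        err, groups, count, remaining = driver.cuDevSmResourceSplitByCount(
            2, sm_resource, 0, 8)

        # Step 3: turn the first group into a resource descriptor.
        err, desc = driver.cuDevResourceGenerateDesc([groups[0]], 1)

        # Step 4: provision the resources and create the green context.
        err, green_ctx = driver.cuGreenCtxCreate(
            desc, dev, int(driver.CUgreenCtxCreate_flags.CU_GREEN_CTX_DEFAULT_STREAM))
        return green_ctx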

For CU_DEV_RESOURCE_TYPE_SM, the partitions created have minimum SM count requirements, often rounding up and aligning the minCount provided to cuDevSmResourceSplitByCount. The following is a guideline for each architecture and may be subject to change:

  • On Compute Architecture 6.X: The minimum count is 1 SM.

  • On Compute Architecture 7.X: The minimum count is 2 SMs and must be a multiple of 2.

  • On Compute Architecture 8.X: The minimum count is 4 SMs and must be a multiple of 2.

  • On Compute Architecture 9.0+: The minimum count is 8 SMs and must be a multiple of 8.

In the future, flags can be provided to trade off functional and performance characteristics against finer-grained SM partitions.

Even if the green contexts have disjoint SM partitions, it is not guaranteed that the kernels launched in them will run concurrently or have forward progress guarantees. This is due to other resources (like HW connections, see ::CUDA_DEVICE_MAX_CONNECTIONS) that could cause a dependency. Additionally, in certain scenarios, it is possible for the workload to run on more SMs than was provisioned (but never less). The following are two scenarios which can exhibit this behavior:

  • On Volta+ MPS: When CUDA_MPS_ACTIVE_THREAD_PERCENTAGE is used, the set of SMs used for running kernels can be scaled up to the number of SMs used for the MPS client.

  • On Compute Architecture 9.x: When a module with dynamic parallelism (CDP) is loaded, all future kernels running under green contexts may use and share an additional set of 2 SMs.

class cuda.bindings.driver.CUdevSmResource_st(void_ptr _ptr=0)
smCount

The number of streaming multiprocessors available in this resource. This is an output parameter only; do not write to this field.

Type:

unsigned int

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUdevResource_st(void_ptr _ptr=0)
type

Type of resource, dictates which union field was last set

Type:

CUdevResourceType

_internal_padding
Type:

bytes

sm

Resource corresponding to the CU_DEV_RESOURCE_TYPE_SM type.

Type:

CUdevSmResource

_oversize
Type:

bytes

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUdevSmResource
smCount

The number of streaming multiprocessors available in this resource. This is an output parameter only; do not write to this field.

Type:

unsigned int

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUdevResource
type

Type of resource, dictates which union field was last set

Type:

CUdevResourceType

_internal_padding
Type:

bytes

sm

Resource corresponding to the CU_DEV_RESOURCE_TYPE_SM type.

Type:

CUdevSmResource

_oversize
Type:

bytes

getPtr()

Get memory address of class instance

class cuda.bindings.driver.CUgreenCtxCreate_flags(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)
CU_GREEN_CTX_DEFAULT_STREAM = 1

Required. Creates a default stream to use inside the green context

class cuda.bindings.driver.CUdevSmResourceSplit_flags(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)
CU_DEV_SM_RESOURCE_SPLIT_IGNORE_SM_COSCHEDULING = 1
CU_DEV_SM_RESOURCE_SPLIT_MAX_POTENTIAL_CLUSTER_SIZE = 2
class cuda.bindings.driver.CUdevResourceType(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

Type of resource

CU_DEV_RESOURCE_TYPE_INVALID = 0
CU_DEV_RESOURCE_TYPE_SM = 1

Streaming multiprocessors related information

class cuda.bindings.driver.CUdevResourceDesc(*args, **kwargs)

An opaque descriptor handle. The descriptor encapsulates multiple created and configured resources. Created via cuDevResourceGenerateDesc

getPtr()

Get memory address of class instance

cuda.bindings.driver.cuGreenCtxCreate(desc, dev, unsigned int flags)

Creates a green context with a specified set of resources.

This API creates a green context with the resources specified in the descriptor desc and returns it in the handle represented by phCtx. This API will retain the primary context on device dev, which is released when the green context is destroyed. It is advised to have the primary context active before calling this API to avoid the heavy cost of triggering primary context initialization and deinitialization multiple times.

The API does not make the green context current. To make it current, first convert the green context to a CUcontext using cuCtxFromGreenCtx and then call cuCtxSetCurrent / cuCtxPushCurrent. Note that a green context can be current to only one thread at a time. There is no internal synchronization to make API calls accessing the same green context from multiple threads safe.

Note: The API is not supported on 32-bit platforms.

The supported flags are:

  • CU_GREEN_CTX_DEFAULT_STREAM : Creates a default stream to use inside the green context. Required.

Parameters:
  • desc (CUdevResourceDesc) – Descriptor generated via cuDevResourceGenerateDesc which contains the set of resources to be used

  • dev (CUdevice) – Device on which to create the green context.

  • flags (unsigned int) – One of the supported green context creation flags. CU_GREEN_CTX_DEFAULT_STREAM is required.

Returns:

cuda.bindings.driver.cuGreenCtxDestroy(hCtx)

Destroys a green context.

Destroys the green context, releasing the primary context of the device that this green context was created for. Any resources provisioned for this green context (that were initially available via the resource descriptor) are released as well.

Parameters:

hCtx (CUgreenCtx) – Green context to be destroyed

Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_CONTEXT_IS_DESTROYED

Return type:

CUresult

cuda.bindings.driver.cuCtxFromGreenCtx(hCtx)

Converts a green context into the primary context.

The API converts a green context into the primary context returned in pContext. It is important to note that the converted context pContext is a normal primary context but with the resources of the specified green context hCtx. Once converted, it can then be used to set the context current with cuCtxSetCurrent or with any of the CUDA APIs that accept a CUcontext parameter.

Users are expected to call this API before calling any CUDA APIs that accept a CUcontext. Failing to do so will result in the APIs returning CUDA_ERROR_INVALID_CONTEXT.
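
For illustration, a minimal sketch of the conversion step, assuming a green context already created via cuGreenCtxCreate and the usual tuple-return convention (error checks omitted):

    from cuda.bindings import driver

    def make_green_ctx_current(green_ctx):
        # Convert the green context to a CUcontext, then make it current
        # on the calling thread so subsequent CUDA APIs target it.
        err, ctx = driver.cuCtxFromGreenCtx(green_ctx)
        err, = driver.cuCtxSetCurrent(ctx)
        return ctx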

Parameters:

hCtx (CUgreenCtx) – Green context to convert

Returns:

See also

cuGreenCtxCreate

cuda.bindings.driver.cuDeviceGetDevResource(device, typename: CUdevResourceType)

Get device resources.

Get the typename resources available to the device. This may often be the starting point for further partitioning or configuring of resources.

Note: The API is not supported on 32-bit platforms.

Parameters:
Returns:

cuda.bindings.driver.cuCtxGetDevResource(hCtx, typename: CUdevResourceType)

Get context resources.

Get the typename resources available to the context represented by hCtx.

Note: The API is not supported on 32-bit platforms.

Parameters:
Returns:

cuda.bindings.driver.cuGreenCtxGetDevResource(hCtx, typename: CUdevResourceType)

Get green context resources.

Get the typename resources available to the green context represented by hCtx

Parameters:
Returns:

cuda.bindings.driver.cuDevSmResourceSplitByCount(unsigned int nbGroups, CUdevResource input_: Optional[CUdevResource], unsigned int useFlags, unsigned int minCount)

Splits CU_DEV_RESOURCE_TYPE_SM resources.

Splits CU_DEV_RESOURCE_TYPE_SM resources into nbGroups groups, adhering to the minimum SM count specified in minCount and the usage flags in useFlags. If result is NULL, the API simulates a split and provides the number of groups that would be created in nbGroups. Otherwise, nbGroups must point to the number of elements in result and, on return, the API will overwrite nbGroups with the number of groups actually created. The groups are written to the array in result. nbGroups can be less than the maximum possible number of groups if fewer groups are needed.

This API is used to spatially partition the input resource. The input resource needs to come from one of cuDeviceGetDevResource, cuCtxGetDevResource, or cuGreenCtxGetDevResource. A limitation of the API is that the output results cannot be split again without first creating a descriptor and a green context with that descriptor.

When creating the groups, the API will take into account the performance and functional characteristics of the input resource, and guarantee a split that will create a disjoint set of symmetrical partitions. This may lead to fewer groups created than purely dividing the total SM count by the minCount due to cluster requirements or alignment and granularity requirements for the minCount.

The remainder set does not have the same functional or performance guarantees as the groups in result. Its use should be carefully planned and future partitions of the remainder set are discouraged.

The following flags are supported:

  • CU_DEV_SM_RESOURCE_SPLIT_IGNORE_SM_COSCHEDULING : Lower the minimum SM count and alignment, and treat each SM independent of its hierarchy. This allows more fine grained partitions but at the cost of advanced features (such as large clusters on compute capability 9.0+).

  • CU_DEV_SM_RESOURCE_SPLIT_MAX_POTENTIAL_CLUSTER_SIZE : Compute Capability 9.0+ only. Attempt to create groups that may allow for maximally sized thread clusters. This can be queried post green context creation using cuOccupancyMaxPotentialClusterSize.

A successful API call must either have:

  • A valid array of result pointers of size passed in nbGroups, with input of type CU_DEV_RESOURCE_TYPE_SM. Value of minCount must be between 0 and the SM count specified in input. remaining may be NULL.

  • NULL passed in for result, with a valid integer pointer in nbGroups and input of type CU_DEV_RESOURCE_TYPE_SM. Value of minCount must be between 0 and the SM count specified in input. remaining may be NULL. This queries the number of groups that would be created by the API.

Note: The API is not supported on 32-bit platforms.
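
As an illustration, the sketch below performs a split with the co-scheduling flag described above; the (err, groups, count, remaining) return packing is an assumption based on the usual binding convention, error checks are omitted, and the result=NULL query form mentioned above is not shown:

    from cuda.bindings import driver

    SM = driver.CUdevResourceType.CU_DEV_RESOURCE_TYPE_SM
    IGNORE = driver.CUdevSmResourceSplit_flags.CU_DEV_SM_RESOURCE_SPLIT_IGNORE_SM_COSCHEDULING

    def split_fine_grained(dev, nb_groups, min_count):
        # Split the device-wide SM resource while ignoring SM co-scheduling,
        # allowing finer-grained partitions at the cost of advanced features.
        err, sm_resource = driver.cuDeviceGetDevResource(dev, SM)
        err, groups, count, remaining = driver.cuDevSmResourceSplitByCount(
            nb_groups, sm_resource, int(IGNORE), min_count)
        # `remaining` holds leftover SMs with weaker guarantees (see above).
        return groups[:count], remaining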

Parameters:
  • nbGroups (unsigned int) – This is a pointer, specifying the number of groups that would be or should be created as described below.

  • input (CUdevResource) – Input SM resource to be split. Must be a valid CU_DEV_RESOURCE_TYPE_SM resource.

  • useFlags (unsigned int) – Flags specifying how these partitions are used or which constraints to abide by when splitting the input. Zero is valid for default behavior.

  • minCount (unsigned int) – Minimum number of SMs required

Returns:

cuda.bindings.driver.cuDevResourceGenerateDesc(resources: Optional[Tuple[CUdevResource] | List[CUdevResource]], unsigned int nbResources)

Generate a resource descriptor.

Generates a single resource descriptor with the set of resources specified in resources. The generated resource descriptor is necessary for the creation of green contexts via the cuGreenCtxCreate API. Resources of the same type can be passed in, provided they meet the requirements as noted below.

A successful API call must have:

  • A valid output pointer for the phDesc descriptor as well as a valid array of resources pointers, with the array size passed in nbResources. If multiple resources are provided in resources, the device they came from must be the same, otherwise CUDA_ERROR_INVALID_RESOURCE_CONFIGURATION is returned. If multiple resources are provided in resources and they are of type CU_DEV_RESOURCE_TYPE_SM, they must be outputs (whether result or remaining) from the same split API instance, otherwise CUDA_ERROR_INVALID_RESOURCE_CONFIGURATION is returned.

Note: The API is not supported on 32-bit platforms.
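
A short hedged sketch of the same-split requirement above (tuple-return convention assumed, error checks omitted):

    from cuda.bindings import driver

    def desc_from_same_split(groups):
        # `groups` must all be outputs of the same cuDevSmResourceSplitByCount
        # call; mixing outputs of different splits or devices yields
        # CUDA_ERROR_INVALID_RESOURCE_CONFIGURATION.
        err, desc = driver.cuDevResourceGenerateDesc(groups[:2], 2)
        return desc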

Parameters:
  • resources (List[CUdevResource]) – Array of resources to be included in the descriptor

  • nbResources (unsigned int) – Number of resources passed in resources

Returns:

cuda.bindings.driver.cuGreenCtxRecordEvent(hCtx, hEvent)

Records an event.

Captures in hEvent all the activities of the green context of hCtx at the time of this call. hEvent and hCtx must be from the same primary context otherwise CUDA_ERROR_INVALID_HANDLE is returned. Calls such as cuEventQuery() or cuGreenCtxWaitEvent() will then examine or wait for completion of the work that was captured. Uses of hCtx after this call do not modify hEvent.

Parameters:
Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_HANDLE, CUDA_ERROR_STREAM_CAPTURE_UNSUPPORTED

Return type:

CUresult

Notes

The API will return CUDA_ERROR_STREAM_CAPTURE_UNSUPPORTED if the specified green context hCtx has a stream in capture mode. In such a case, the call will invalidate all the conflicting captures.

cuda.bindings.driver.cuGreenCtxWaitEvent(hCtx, hEvent)

Make a green context wait on an event.

Makes all future work submitted to green context hCtx wait for all work captured in hEvent. The synchronization will be performed on the device and will not block the calling CPU thread. See cuGreenCtxRecordEvent() or cuEventRecord(), for details on what is captured by an event.
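
A minimal sketch of ordering work between two green contexts with cuGreenCtxRecordEvent and cuGreenCtxWaitEvent (tuple-return convention assumed; the event must satisfy the same-primary-context requirement noted for cuGreenCtxRecordEvent; error checks omitted):

    from cuda.bindings import driver

    def order_green_contexts(producer_ctx, consumer_ctx):
        # Capture the producer green context's work in an event, then make
        # all future work in the consumer green context wait on it.
        err, event = driver.cuEventCreate(
            int(driver.CUevent_flags.CU_EVENT_DISABLE_TIMING))
        err, = driver.cuGreenCtxRecordEvent(producer_ctx, event)
        err, = driver.cuGreenCtxWaitEvent(consumer_ctx, event)
        err, = driver.cuEventDestroy(event)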

Parameters:
Returns:

CUDA_SUCCESS, CUDA_ERROR_DEINITIALIZED, CUDA_ERROR_NOT_INITIALIZED, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_HANDLE, CUDA_ERROR_STREAM_CAPTURE_UNSUPPORTED

Return type:

CUresult

Notes

hEvent may be from a different context or device than hCtx.

The API will return CUDA_ERROR_STREAM_CAPTURE_UNSUPPORTED and invalidate the capture if the specified event hEvent is part of an ongoing capture sequence or if the specified green context hCtx has a stream in capture mode.

cuda.bindings.driver.cuStreamGetGreenCtx(hStream)

Query the green context associated with a stream.

Returns the CUDA green context that the stream is associated with, or NULL if the stream is not associated with any green context.

The stream handle hStream can refer to any of the following:

Passing an invalid handle will result in undefined behavior.
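
A hedged one-liner showing the query (the (err, green_ctx) return packing follows the usual binding convention and is an assumption here):

    from cuda.bindings import driver

    def stream_green_ctx(stream):
        # Returns the associated green context, or a NULL CUgreenCtx handle
        # if the stream is not associated with any green context.
        err, green_ctx = driver.cuStreamGetGreenCtx(stream)
        return green_ctx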

Parameters:

hStream (CUstream or cudaStream_t) – Handle to the stream to be queried

Returns:

cuda.bindings.driver.cuGreenCtxStreamCreate(greenCtx, unsigned int flags, int priority)

Create a stream for use in the green context.

Creates a stream for use in the specified green context greenCtx and returns a handle in phStream. The stream can be destroyed by calling cuStreamDestroy(). Note that the API ignores the context that is current to the calling thread and creates a stream in the specified green context greenCtx.

The supported values for flags are:

  • CU_STREAM_NON_BLOCKING: This must be specified. It indicates that work running in the created stream may run concurrently with work in the default stream, and that the created stream should perform no implicit synchronization with the default stream.

Specifying priority affects the scheduling priority of work in the stream. Priorities provide a hint to preferentially run work with higher priority when possible, but do not preempt already-running work or provide any other functional guarantee on execution order. priority follows a convention where lower numbers represent higher priorities. ‘0’ represents default priority. The range of meaningful numerical priorities can be queried using cuCtxGetStreamPriorityRange. If the specified priority is outside the numerical range returned by cuCtxGetStreamPriorityRange, it will automatically be clamped to the lowest or the highest number in the range.
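
For example, a hedged sketch that queries the meaningful priority range and creates a high-priority non-blocking stream in a green context (a context is assumed to be current for the range query; error checks omitted):

    from cuda.bindings import driver

    def create_priority_stream(green_ctx):
        # Lower numbers mean higher priority; `greatest` is the numerically
        # lowest value returned, i.e. the highest priority.
        err, least, greatest = driver.cuCtxGetStreamPriorityRange()
        err, stream = driver.cuGreenCtxStreamCreate(
            green_ctx, int(driver.CUstream_flags.CU_STREAM_NON_BLOCKING), greatest)
        return stream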

Parameters:
  • greenCtx (CUgreenCtx) – Green context in which to create the stream

  • flags (unsigned int) – Flags for stream creation. CU_STREAM_NON_BLOCKING must be specified.

  • priority (int) – Stream priority. Lower numbers represent higher priorities. See cuCtxGetStreamPriorityRange for more information about meaningful stream priorities that can be passed.

Returns:

Notes

In the current implementation, only compute kernels launched in priority streams are affected by the stream’s priority. Stream priorities have no effect on host-to-device and device-to-host memory operations.

driver.RESOURCE_ABI_VERSION = 1
driver.RESOURCE_ABI_EXTERNAL_BYTES = 48

EGL Interoperability

This section describes the EGL interoperability functions of the low-level CUDA driver application programming interface.

cuda.bindings.driver.cuGraphicsEGLRegisterImage(image, unsigned int flags)

Registers an EGL image.

Registers the EGLImageKHR specified by image for access by CUDA. A handle to the registered object is returned as pCudaResource. Additional Mapping/Unmapping is not required for the registered resource and cuGraphicsResourceGetMappedEglFrame can be directly called on the pCudaResource.

The application will be responsible for synchronizing access to shared objects. The application must ensure that any pending operations which access the objects have completed before passing control to CUDA. This may be accomplished by issuing and waiting for a glFinish command on all GL contexts (for OpenGL, and likewise for other APIs). The application will also be responsible for ensuring that any pending operation on the registered CUDA resource has completed prior to executing subsequent commands in other APIs accessing the same memory objects. This can be accomplished by calling cuCtxSynchronize or cuEventSynchronize (preferably).

The surface’s intended usage is specified using flags, as follows:

The EGLImageKHR is an object which can be used to create an EGLImage target resource. It is defined as a void pointer: typedef void* EGLImageKHR

Parameters:
  • image (EGLImageKHR) – An EGLImageKHR image which can be used to create target resource.

  • flags (unsigned int) – Map flags

Returns:

cuda.bindings.driver.cuEGLStreamConsumerConnect(stream)

Connect CUDA to EGLStream as a consumer.

Connect CUDA as a consumer to EGLStreamKHR specified by stream.

The EGLStreamKHR is an EGL object that transfers a sequence of image frames from one API to another.

Parameters:

stream (EGLStreamKHR) – EGLStreamKHR handle

Returns:

cuda.bindings.driver.cuEGLStreamConsumerConnectWithFlags(stream, unsigned int flags)

Connect CUDA to EGLStream as a consumer with given flags.

Connect CUDA as a consumer to EGLStreamKHR specified by stream with specified flags defined by CUeglResourceLocationFlags.

The flags specify whether the consumer wants to access frames from system memory or video memory. Default is CU_EGL_RESOURCE_LOCATION_VIDMEM.
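
A hedged sketch of a system-memory consumer connection; egl_stream is a hypothetical EGLStreamKHR handle obtained from EGL, and the (err, conn) return packing is an assumption based on the usual binding convention:

    from cuda.bindings import driver

    def connect_consumer_sysmem(egl_stream):
        # Request frames in system memory instead of the default video memory.
        err, conn = driver.cuEGLStreamConsumerConnectWithFlags(
            egl_stream,
            int(driver.CUeglResourceLocationFlags.CU_EGL_RESOURCE_LOCATION_SYSMEM))
        return conn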

Parameters:
  • stream (EGLStreamKHR) – EGLStreamKHR handle

  • flags (unsigned int) – Flags denote intended location - system or video.

Returns:

cuda.bindings.driver.cuEGLStreamConsumerDisconnect(conn)

Disconnect CUDA as a consumer from an EGLStream.

Disconnects CUDA as a consumer from the EGLStreamKHR.

Parameters:

conn (CUeglStreamConnection) – Connection to disconnect.

Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_HANDLE, CUDA_ERROR_INVALID_CONTEXT,

Return type:

CUresult

cuda.bindings.driver.cuEGLStreamConsumerAcquireFrame(conn, pCudaResource, pStream, unsigned int timeout)

Acquire an image frame from the EGLStream with CUDA as a consumer.

Acquire an image frame from EGLStreamKHR. This API can also acquire an old frame presented by the producer unless explicitly disabled by setting EGL_SUPPORT_REUSE_NV flag to EGL_FALSE during stream initialization. By default, EGLStream is created with this flag set to EGL_TRUE. cuGraphicsResourceGetMappedEglFrame can be called on pCudaResource to get CUeglFrame.

Parameters:
  • conn (CUeglStreamConnection) – Connection on which to acquire

  • pCudaResource (CUgraphicsResource) – CUDA resource on which the stream frame will be mapped for use.

  • pStream (CUstream) – CUDA stream for synchronization and any data migrations implied by CUeglResourceLocationFlags.

  • timeout (unsigned int) – Desired timeout in usec for a new frame to be acquired. If set as CUDA_EGL_INFINITE_TIMEOUT, acquire waits infinitely. After a timeout occurs, the CUDA consumer tries to acquire an old frame if one is available and the EGL_SUPPORT_REUSE_NV flag is set.

Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_HANDLE, CUDA_ERROR_LAUNCH_TIMEOUT,

Return type:

CUresult

cuda.bindings.driver.cuEGLStreamConsumerReleaseFrame(conn, pCudaResource, pStream)

Releases the last frame acquired from the EGLStream.

Release the acquired image frame specified by pCudaResource to EGLStreamKHR. If the EGL_SUPPORT_REUSE_NV flag was set to EGL_TRUE at the time of EGL stream creation, this API doesn't release the last frame acquired on the EGLStream. By default, the EGLStream is created with this flag set to EGL_TRUE.

Parameters:
Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_HANDLE,

Return type:

CUresult

cuda.bindings.driver.cuEGLStreamProducerConnect(stream, width, height)

Connect CUDA to EGLStream as a producer.

Connect CUDA as a producer to EGLStreamKHR specified by stream.

The EGLStreamKHR is an EGL object that transfers a sequence of image frames from one API to another.

Parameters:
  • stream (EGLStreamKHR) – EGLStreamKHR handle

  • width (EGLint) – width of the image to be submitted to the stream

  • height (EGLint) – height of the image to be submitted to the stream

Returns:

cuda.bindings.driver.cuEGLStreamProducerDisconnect(conn)

Disconnect CUDA as a producer from an EGLStream.

Disconnects CUDA as a producer from the EGLStreamKHR.

Parameters:

conn (CUeglStreamConnection) – Connection to disconnect.

Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_HANDLE, CUDA_ERROR_INVALID_CONTEXT,

Return type:

CUresult

cuda.bindings.driver.cuEGLStreamProducerPresentFrame(conn, CUeglFrame eglframe: CUeglFrame, pStream)

Present a CUDA eglFrame to the EGLStream with CUDA as a producer.

When a frame is presented by the producer, it gets associated with the EGLStream and thus it is illegal to free the frame before the producer is disconnected. If a frame is freed and reused it may lead to undefined behavior.

If the producer and consumer are on different GPUs (iGPU and dGPU) then the frame type CU_EGL_FRAME_TYPE_ARRAY is not supported. CU_EGL_FRAME_TYPE_PITCH can be used for such cross-device applications.

The CUeglFrame is defined as:

View CUDA Toolkit Documentation for a C++ code example

For a CUeglFrame of type CU_EGL_FRAME_TYPE_PITCH, the application may present a sub-region of a memory allocation. In that case, the pitched pointer will specify the start address of the sub-region in the allocation and the corresponding CUeglFrame fields will specify the dimensions of the sub-region.

Parameters:
  • conn (CUeglStreamConnection) – Connection on which to present the CUDA array

  • eglframe (CUeglFrame) – CUDA EGLStream producer frame handle to be sent to the consumer over the EGLStream.

  • pStream (CUstream) – CUDA stream on which to present the frame.

Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_HANDLE,

Return type:

CUresult

cuda.bindings.driver.cuEGLStreamProducerReturnFrame(conn, CUeglFrame eglframe: Optional[CUeglFrame], pStream)

Return the CUDA eglFrame to the EGLStream released by the consumer.

This API can potentially return CUDA_ERROR_LAUNCH_TIMEOUT if the consumer has not returned a frame to the EGL stream. If a timeout is returned, the application can retry.

Parameters:
  • conn (CUeglStreamConnection) – Connection on which to return

  • eglframe (CUeglFrame) – CUDA EGLStream producer frame handle returned from the consumer over the EGLStream.

  • pStream (CUstream) – CUDA stream on which to return the frame.

Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_HANDLE, CUDA_ERROR_LAUNCH_TIMEOUT

Return type:

CUresult

cuda.bindings.driver.cuGraphicsResourceGetMappedEglFrame(resource, unsigned int index, unsigned int mipLevel)

Get an eglFrame through which to access a registered EGL graphics resource.

Returns in *eglFrame an eglFrame pointer through which the registered graphics resource resource may be accessed. This API can only be called for registered EGL graphics resources.

The CUeglFrame is defined as:

View CUDA Toolkit Documentation for a C++ code example

If resource is not registered then CUDA_ERROR_NOT_MAPPED is returned.

Parameters:
  • resource (CUgraphicsResource) – None

  • index (unsigned int) – None

  • mipLevel (unsigned int) – None

Returns:

cuda.bindings.driver.cuEventCreateFromEGLSync(eglSync, unsigned int flags)

Creates an event from EGLSync object.

Creates an event *phEvent from an EGLSyncKHR eglSync with the flags specified via flags. Valid flags include:

Once the eglSync gets destroyed, cuEventDestroy is the only API that can be invoked on the event.

cuEventRecord and TimingData are not supported for events created from EGLSync.

The EGLSyncKHR is an opaque handle to an EGL sync object. typedef void* EGLSyncKHR

Parameters:
  • eglSync (EGLSyncKHR) – Opaque handle to EGLSync object

  • flags (unsigned int) – Event creation flags

Returns:

OpenGL Interoperability

This section describes the OpenGL interoperability functions of the low-level CUDA driver application programming interface. Note that mapping of OpenGL resources is performed with the graphics-API-agnostic resource mapping interface described in Graphics Interoperability.

class cuda.bindings.driver.CUGLDeviceList(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)

CUDA devices corresponding to an OpenGL device

CU_GL_DEVICE_LIST_ALL = 1

The CUDA devices for all GPUs used by the current OpenGL context

CU_GL_DEVICE_LIST_CURRENT_FRAME = 2

The CUDA devices for the GPUs used by the current OpenGL context in its currently rendering frame

CU_GL_DEVICE_LIST_NEXT_FRAME = 3

The CUDA devices for the GPUs to be used by the current OpenGL context in the next frame

cuda.bindings.driver.cuGraphicsGLRegisterBuffer(buffer, unsigned int Flags)

Registers an OpenGL buffer object.

Registers the buffer object specified by buffer for access by CUDA. A handle to the registered object is returned as pCudaResource. The register flags Flags specify the intended usage, as follows:

Parameters:
  • buffer (GLuint) – name of buffer object to be registered

  • Flags (unsigned int) – Register flags

Returns:

cuda.bindings.driver.cuGraphicsGLRegisterImage(image, target, unsigned int Flags)

Register an OpenGL texture or renderbuffer object.

Registers the texture or renderbuffer object specified by image for access by CUDA. A handle to the registered object is returned as pCudaResource.

target must match the type of the object, and must be one of GL_TEXTURE_2D, GL_TEXTURE_RECTANGLE, GL_TEXTURE_CUBE_MAP, GL_TEXTURE_3D, GL_TEXTURE_2D_ARRAY, or GL_RENDERBUFFER.

The register flags Flags specify the intended usage, as follows:

The following image formats are supported. For brevity's sake, the list is abbreviated. For example, {GL_R, GL_RG} X {8, 16} would expand to the following 4 formats: {GL_R8, GL_R16, GL_RG8, GL_RG16}:

  • GL_RED, GL_RG, GL_RGBA, GL_LUMINANCE, GL_ALPHA, GL_LUMINANCE_ALPHA, GL_INTENSITY

  • {GL_R, GL_RG, GL_RGBA} X {8, 16, 16F, 32F, 8UI, 16UI, 32UI, 8I, 16I, 32I}

  • {GL_LUMINANCE, GL_ALPHA, GL_LUMINANCE_ALPHA, GL_INTENSITY} X {8, 16, 16F_ARB, 32F_ARB, 8UI_EXT, 16UI_EXT, 32UI_EXT, 8I_EXT, 16I_EXT, 32I_EXT}

The following image classes are currently disallowed:

  • Textures with borders

  • Multisampled renderbuffers

Parameters:
  • image (GLuint) – name of texture or renderbuffer object to be registered

  • target (GLenum) – Identifies the type of object specified by image

  • Flags (unsigned int) – Register flags

Returns:

cuda.bindings.driver.cuGLGetDevices(unsigned int cudaDeviceCount, deviceList: CUGLDeviceList)

Gets the CUDA devices associated with the current OpenGL context.

Returns in *pCudaDeviceCount the number of CUDA-compatible devices corresponding to the current OpenGL context. Also returns in *pCudaDevices at most cudaDeviceCount of the CUDA-compatible devices corresponding to the current OpenGL context. If any of the GPUs being used by the current OpenGL context are not CUDA capable then the call will return CUDA_ERROR_NO_DEVICE.

The deviceList argument may be any of the following: CU_GL_DEVICE_LIST_ALL: Query all devices used by the current OpenGL context. CU_GL_DEVICE_LIST_CURRENT_FRAME: Query the devices used by the current OpenGL context to render the current frame (in SLI). CU_GL_DEVICE_LIST_NEXT_FRAME: Query the devices used by the current OpenGL context to render the next frame (in SLI). Note that this is a prediction, it can’t be guaranteed that this is correct in all cases.
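
For illustration, a minimal sketch that lists all CUDA devices used by the current OpenGL context (must be called from a thread with a current OpenGL context; error checks omitted):

    from cuda.bindings import driver

    def gl_devices_for_current_context(max_devices=8):
        # Returns (CUresult, device count, device list) per the
        # documented return values below.
        err, count, devices = driver.cuGLGetDevices(
            max_devices, driver.CUGLDeviceList.CU_GL_DEVICE_LIST_ALL)
        return devices[:count]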

Parameters:
  • cudaDeviceCount (unsigned int) – The size of the output device array pCudaDevices.

  • deviceList (CUGLDeviceList) – The set of devices to return.

Returns:

  • CUresult – CUDA_SUCCESS, CUDA_ERROR_NO_DEVICE, CUDA_ERROR_INVALID_VALUE, CUDA_ERROR_INVALID_CONTEXT, CUDA_ERROR_INVALID_GRAPHICS_CONTEXT

  • pCudaDeviceCount (unsigned int) – Returned number of CUDA devices.

  • pCudaDevices (List[CUdevice]) – Returned CUDA devices.

See also

cudaGLGetDevices

Notes

This function is not supported on Mac OS X.

Profiler Control

This section describes the profiler control functions of the low-level CUDA driver application programming interface.

cuda.bindings.driver.cuProfilerStart()

Enable profiling.

Enables profile collection by the active profiling tool for the current context. If profiling is already enabled, then cuProfilerStart() has no effect.

cuProfilerStart and cuProfilerStop APIs are used to programmatically control the profiling granularity by allowing profiling to be done only on selective pieces of code.
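
For example, a small helper that restricts collection to a region of interest (error checks omitted):

    import contextlib
    from cuda.bindings import driver

    @contextlib.contextmanager
    def profiled_region():
        # Only work issued inside this block is collected by the active
        # profiling tool for the current context.
        err, = driver.cuProfilerStart()
        try:
            yield
        finally:
            err, = driver.cuProfilerStop()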

Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_CONTEXT

Return type:

CUresult

See also

cuProfilerInitialize, cuProfilerStop, cudaProfilerStart

cuda.bindings.driver.cuProfilerStop()

Disable profiling.

Disables profile collection by the active profiling tool for the current context. If profiling is already disabled, then cuProfilerStop() has no effect.

cuProfilerStart and cuProfilerStop APIs are used to programmatically control the profiling granularity by allowing profiling to be done only on selective pieces of code.

Returns:

CUDA_SUCCESS, CUDA_ERROR_INVALID_CONTEXT

Return type:

CUresult

See also

cuProfilerInitialize, cuProfilerStart, cudaProfilerStop

VDPAU Interoperability

This section describes the VDPAU interoperability functions of the low-level CUDA driver application programming interface.

cuda.bindings.driver.cuVDPAUGetDevice(vdpDevice, vdpGetProcAddress)

Gets the CUDA device associated with a VDPAU device.

Returns in *pDevice the CUDA device associated with a vdpDevice, if applicable.

Parameters:
  • vdpDevice (VdpDevice) – A VdpDevice handle

  • vdpGetProcAddress (VdpGetProcAddress) – VDPAU’s VdpGetProcAddress function pointer

Returns:

cuda.bindings.driver.cuVDPAUCtxCreate(unsigned int flags, device, vdpDevice, vdpGetProcAddress)

Create a CUDA context for interoperability with VDPAU.

Creates a new CUDA context, initializes VDPAU interoperability, and associates the CUDA context with the calling thread. It must be called before performing any other VDPAU interoperability operations. It may fail if the needed VDPAU driver facilities are not available. For usage of the flags parameter, see cuCtxCreate().

Parameters:
  • flags (unsigned int) – Options for CUDA context creation

  • device (CUdevice) – Device on which to create the context

  • vdpDevice (VdpDevice) – The VdpDevice to interop with

  • vdpGetProcAddress (VdpGetProcAddress) – VDPAU’s VdpGetProcAddress function pointer

Returns:

cuda.bindings.driver.cuGraphicsVDPAURegisterVideoSurface(vdpSurface, unsigned int flags)

Registers a VDPAU VdpVideoSurface object.

Registers the VdpVideoSurface specified by vdpSurface for access by CUDA. A handle to the registered object is returned as pCudaResource. The surface’s intended usage is specified using flags, as follows:

The VdpVideoSurface is presented as an array of subresources that may be accessed using pointers returned by cuGraphicsSubResourceGetMappedArray. The exact number of valid arrayIndex values depends on the VDPAU surface format. The mapping is shown in the table below. mipLevel must be 0.

Parameters:
  • vdpSurface (VdpVideoSurface) – The VdpVideoSurface to be registered

  • flags (unsigned int) – Map flags

Returns:

cuda.bindings.driver.cuGraphicsVDPAURegisterOutputSurface(vdpSurface, unsigned int flags)

Registers a VDPAU VdpOutputSurface object.

Registers the VdpOutputSurface specified by vdpSurface for access by CUDA. A handle to the registered object is returned as pCudaResource. The surface’s intended usage is specified using flags, as follows:

The VdpOutputSurface is presented as an array of subresources that may be accessed using pointers returned by cuGraphicsSubResourceGetMappedArray. The exact number of valid arrayIndex values depends on the VDPAU surface format. The mapping is shown in the table below. mipLevel must be 0.

Parameters:
  • vdpSurface (VdpOutputSurface) – The VdpOutputSurface to be registered

  • flags (unsigned int) – Map flags

Returns: