Coverage for cuda / core / _utils / driver_cu_result_explanations.py: 100.00%
1 statements
coverage.py v7.13.4, created at 2026-03-08 01:07 +0000
# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: LicenseRef-NVIDIA-SOFTWARE-LICENSE

# To regenerate the dictionary below run:
# ../../../../../toolshed/reformat_cuda_enums_as_py.py /usr/local/cuda/include/cuda.h
# Replace the dictionary below with the output.
# Also update the CUDA Toolkit version number below.

# CUDA Toolkit v13.1.0
DRIVER_CU_RESULT_EXPLANATIONS = {
    0: (
        "The API call returned with no errors. In the case of query calls, this"
        " also means that the operation being queried is complete (see"
        " ::cuEventQuery() and ::cuStreamQuery())."
    ),
    1: (
        "This indicates that one or more of the parameters passed to the API call"
        " is not within an acceptable range of values."
    ),
    2: (
        "The API call failed because it was unable to allocate enough memory or"
        " other resources to perform the requested operation."
    ),
    3: (
        "This indicates that the CUDA driver has not been initialized with"
        " ::cuInit() or that initialization has failed."
    ),
    4: "This indicates that the CUDA driver is in the process of shutting down.",
    5: (
        "This indicates profiler is not initialized for this run. This can"
        " happen when the application is running with external profiling tools"
        " like visual profiler."
    ),
    6: (
        "This error return is deprecated as of CUDA 5.0. It is no longer an error"
        " to attempt to enable/disable the profiling via ::cuProfilerStart or"
        " ::cuProfilerStop without initialization."
    ),
    7: (
        "This error return is deprecated as of CUDA 5.0. It is no longer an error"
        " to call cuProfilerStart() when profiling is already enabled."
    ),
    8: (
        "This error return is deprecated as of CUDA 5.0. It is no longer an error"
        " to call cuProfilerStop() when profiling is already disabled."
    ),
    34: (
        "This indicates that the CUDA driver that the application has loaded is a"
        " stub library. Applications that run with the stub rather than a real"
        " driver loaded will result in CUDA API returning this error."
    ),
    36: (
        "This indicates that the API call requires a newer CUDA driver than the one"
        " currently installed. Users should install an updated NVIDIA CUDA driver"
        " to allow the API call to succeed."
    ),
    46: (
        "This indicates that requested CUDA device is unavailable at the current"
        " time. Devices are often unavailable due to use of"
        " ::CU_COMPUTEMODE_EXCLUSIVE_PROCESS or ::CU_COMPUTEMODE_PROHIBITED."
    ),
    100: ("This indicates that no CUDA-capable devices were detected by the installed CUDA driver."),
    101: (
        "This indicates that the device ordinal supplied by the user does not"
        " correspond to a valid CUDA device or that the action requested is"
        " invalid for the specified device."
    ),
    102: "This error indicates that the Grid license is not applied.",
    200: ("This indicates that the device kernel image is invalid. This can also indicate an invalid CUDA module."),
    201: (
        "This most frequently indicates that there is no context bound to the"
        " current thread. This can also be returned if the context passed to an"
        " API call is not a valid handle (such as a context that has had"
        " ::cuCtxDestroy() invoked on it). This can also be returned if a user"
        " mixes different API versions (i.e. 3010 context with 3020 API calls)."
        " See ::cuCtxGetApiVersion() for more details."
        " This can also be returned if the green context passed to an API call"
        " was not converted to a ::CUcontext using ::cuCtxFromGreenCtx API."
    ),
    202: (
        "This indicated that the context being supplied as a parameter to the"
        " API call was already the active context."
        " This error return is deprecated as of CUDA 3.2. It is no longer an"
        " error to attempt to push the active context via ::cuCtxPushCurrent()."
    ),
    205: "This indicates that a map or register operation has failed.",
    206: "This indicates that an unmap or unregister operation has failed.",
    207: ("This indicates that the specified array is currently mapped and thus cannot be destroyed."),
    208: "This indicates that the resource is already mapped.",
    209: (
        "This indicates that there is no kernel image available that is suitable"
        " for the device. This can occur when a user specifies code generation"
        " options for a particular CUDA source file that do not include the"
        " corresponding device configuration."
    ),
    210: "This indicates that a resource has already been acquired.",
    211: "This indicates that a resource is not mapped.",
    212: ("This indicates that a mapped resource is not available for access as an array."),
    213: ("This indicates that a mapped resource is not available for access as a pointer."),
    214: ("This indicates that an uncorrectable ECC error was detected during execution."),
    215: ("This indicates that the ::CUlimit passed to the API call is not supported by the active device."),
    216: (
        "This indicates that the ::CUcontext passed to the API call can"
        " only be bound to a single CPU thread at a time but is already"
        " bound to a CPU thread."
    ),
    217: ("This indicates that peer access is not supported across the given devices."),
    218: "This indicates that a PTX JIT compilation failed.",
    219: "This indicates an error with OpenGL or DirectX context.",
    220: ("This indicates that an uncorrectable NVLink error was detected during the execution."),
    221: "This indicates that the PTX JIT compiler library was not found.",
    222: "This indicates that the provided PTX was compiled with an unsupported toolchain.",
    223: "This indicates that the PTX JIT compilation was disabled.",
    224: ("This indicates that the ::CUexecAffinityType passed to the API call is not supported by the active device."),
    225: (
        "This indicates that the code to be compiled by the PTX JIT contains unsupported call to cudaDeviceSynchronize."
    ),
    226: (
        "This indicates that an exception occurred on the device that is now"
        " contained by the GPU's error containment capability. Common causes are -"
        " a. Certain types of invalid accesses of peer GPU memory over nvlink"
        " b. Certain classes of hardware errors"
        " This leaves the process in an inconsistent state and any further CUDA"
        " work will return the same error. To continue using CUDA, the process must"
        " be terminated and relaunched."
    ),
    300: (
        "This indicates that the device kernel source is invalid. This includes"
        " compilation/linker errors encountered in device code or user error."
    ),
    301: "This indicates that the file specified was not found.",
    302: "This indicates that a link to a shared object failed to resolve.",
    303: "This indicates that initialization of a shared object failed.",
    304: "This indicates that an OS call failed.",
    400: (
        "This indicates that a resource handle passed to the API call was not"
        " valid. Resource handles are opaque types like ::CUstream and ::CUevent."
    ),
    401: (
        "This indicates that a resource required by the API call is not in a"
        " valid state to perform the requested operation."
    ),
    402: (
        "This indicates an attempt was made to introspect an object in a way that"
        " would discard semantically important information. This is either due to"
        " the object using functionality newer than the API version used to"
        " introspect it or omission of optional return arguments."
    ),
    500: (
        "This indicates that a named symbol was not found. Examples of symbols"
        " are global/constant variable names, driver function names, texture names,"
        " and surface names."
    ),
    600: (
        "This indicates that asynchronous operations issued previously have not"
        " completed yet. This result is not actually an error, but must be indicated"
        " differently than ::CUDA_SUCCESS (which indicates completion). Calls that"
        " may return this value include ::cuEventQuery() and ::cuStreamQuery()."
    ),
    700: (
        "While executing a kernel, the device encountered a"
        " load or store instruction on an invalid memory address."
        " This leaves the process in an inconsistent state and any further CUDA work"
        " will return the same error. To continue using CUDA, the process must be terminated"
        " and relaunched."
    ),
    701: (
        "This indicates that a launch did not occur because it did not have"
        " appropriate resources. This error usually indicates that the user has"
        " attempted to pass too many arguments to the device kernel, or the"
        " kernel launch specifies too many threads for the kernel's register"
        " count. Passing arguments of the wrong size (i.e. a 64-bit pointer"
        " when a 32-bit int is expected) is equivalent to passing too many"
        " arguments and can also result in this error."
    ),
    702: (
        "This indicates that the device kernel took too long to execute. This can"
        " only occur if timeouts are enabled - see the device attribute"
        " ::CU_DEVICE_ATTRIBUTE_KERNEL_EXEC_TIMEOUT for more information."
        " This leaves the process in an inconsistent state and any further CUDA work"
        " will return the same error. To continue using CUDA, the process must be terminated"
        " and relaunched."
    ),
    703: ("This error indicates a kernel launch that uses an incompatible texturing mode."),
    704: (
        "This error indicates that a call to ::cuCtxEnablePeerAccess() is"
        " trying to re-enable peer access to a context which has already"
        " had peer access to it enabled."
    ),
    705: (
        "This error indicates that ::cuCtxDisablePeerAccess() is"
        " trying to disable peer access which has not been enabled yet"
        " via ::cuCtxEnablePeerAccess()."
    ),
    708: ("This error indicates that the primary context for the specified device has already been initialized."),
    709: (
        "This error indicates that the context current to the calling thread"
        " has been destroyed using ::cuCtxDestroy, or is a primary context which"
        " has not yet been initialized."
    ),
    710: (
        "A device-side assert triggered during kernel execution. The context"
        " cannot be used anymore, and must be destroyed. All existing device"
        " memory allocations from this context are invalid and must be"
        " reconstructed if the program is to continue using CUDA."
    ),
    711: (
        "This error indicates that the hardware resources required to enable"
        " peer access have been exhausted for one or more of the devices"
        " passed to ::cuCtxEnablePeerAccess()."
    ),
    712: ("This error indicates that the memory range passed to ::cuMemHostRegister() has already been registered."),
    713: (
        "This error indicates that the pointer passed to ::cuMemHostUnregister()"
        " does not correspond to any currently registered memory region."
    ),
    714: (
        "While executing a kernel, the device encountered a stack error."
        " This can be due to stack corruption or exceeding the stack size limit."
        " This leaves the process in an inconsistent state and any further CUDA work"
        " will return the same error. To continue using CUDA, the process must be terminated"
        " and relaunched."
    ),
    715: (
        "While executing a kernel, the device encountered an illegal instruction."
        " This leaves the process in an inconsistent state and any further CUDA work"
        " will return the same error. To continue using CUDA, the process must be terminated"
        " and relaunched."
    ),
    716: (
        "While executing a kernel, the device encountered a load or store instruction"
        " on a memory address which is not aligned."
        " This leaves the process in an inconsistent state and any further CUDA work"
        " will return the same error. To continue using CUDA, the process must be terminated"
        " and relaunched."
    ),
    717: (
        "While executing a kernel, the device encountered an instruction"
        " which can only operate on memory locations in certain address spaces"
        " (global, shared, or local), but was supplied a memory address not"
        " belonging to an allowed address space."
        " This leaves the process in an inconsistent state and any further CUDA work"
        " will return the same error. To continue using CUDA, the process must be terminated"
        " and relaunched."
    ),
    718: (
        "While executing a kernel, the device program counter wrapped its address space."
        " This leaves the process in an inconsistent state and any further CUDA work"
        " will return the same error. To continue using CUDA, the process must be terminated"
        " and relaunched."
    ),
    719: (
        "An exception occurred on the device while executing a kernel. Common"
        " causes include dereferencing an invalid device pointer and accessing"
        " out of bounds shared memory. Less common cases can be system specific - more"
        " information about these cases can be found in the system specific user guide."
        " This leaves the process in an inconsistent state and any further CUDA work"
        " will return the same error. To continue using CUDA, the process must be terminated"
        " and relaunched."
    ),
    720: (
        "This error indicates that the number of blocks launched per grid for a kernel that was"
        " launched via either ::cuLaunchCooperativeKernel or ::cuLaunchCooperativeKernelMultiDevice"
        " exceeds the maximum number of blocks as allowed by ::cuOccupancyMaxActiveBlocksPerMultiprocessor"
        " or ::cuOccupancyMaxActiveBlocksPerMultiprocessorWithFlags times the number of multiprocessors"
        " as specified by the device attribute ::CU_DEVICE_ATTRIBUTE_MULTIPROCESSOR_COUNT."
    ),
    721: (
        "An exception occurred on the device while exiting a kernel using tensor memory: the"
        " tensor memory was not completely deallocated. This leaves the process in an inconsistent"
        " state and any further CUDA work will return the same error. To continue using CUDA, the"
        " process must be terminated and relaunched."
    ),
    800: "This error indicates that the attempted operation is not permitted.",
    801: ("This error indicates that the attempted operation is not supported on the current system or device."),
    802: (
        "This error indicates that the system is not yet ready to start any CUDA"
        " work. To continue using CUDA, verify the system configuration is in a"
        " valid state and all required driver daemons are actively running."
        " More information about this error can be found in the system specific"
        " user guide."
    ),
    803: (
        "This error indicates that there is a mismatch between the versions of"
        " the display driver and the CUDA driver. Refer to the compatibility documentation"
        " for supported versions."
    ),
    804: (
        "This error indicates that the system was upgraded to run with forward compatibility"
        " but the visible hardware detected by CUDA does not support this configuration."
        " Refer to the compatibility documentation for the supported hardware matrix or ensure"
        " that only supported hardware is visible during initialization via the CUDA_VISIBLE_DEVICES"
        " environment variable."
    ),
    805: "This error indicates that the MPS client failed to connect to the MPS control daemon or the MPS server.",
    806: "This error indicates that the remote procedural call between the MPS server and the MPS client failed.",
    807: (
        "This error indicates that the MPS server is not ready to accept new MPS client requests."
        " This error can be returned when the MPS server is in the process of recovering from a fatal failure."
    ),
    808: "This error indicates that the hardware resources required to create MPS client have been exhausted.",
    809: "This error indicates that the hardware resources required to support device connections have been exhausted.",
    810: "This error indicates that the MPS client has been terminated by the server. To continue using CUDA, the process must be terminated and relaunched.",
    811: "This error indicates that the module is using CUDA Dynamic Parallelism, but the current configuration, like MPS, does not support it.",
    812: "This error indicates that a module contains an unsupported interaction between different versions of CUDA Dynamic Parallelism.",
    900: ("This error indicates that the operation is not permitted when the stream is capturing."),
    901: (
        "This error indicates that the current capture sequence on the stream"
        " has been invalidated due to a previous error."
    ),
    902: (
        "This error indicates that the operation would have resulted in a merge of two independent capture sequences."
    ),
    903: "This error indicates that the capture was not initiated in this stream.",
    904: ("This error indicates that the capture sequence contains a fork that was not joined to the primary stream."),
    905: (
        "This error indicates that a dependency would have been created which"
        " crosses the capture sequence boundary. Only implicit in-stream ordering"
        " dependencies are allowed to cross the boundary."
    ),
    906: ("This error indicates a disallowed implicit dependency on a current capture sequence from cudaStreamLegacy."),
    907: (
        "This error indicates that the operation is not permitted on an event which"
        " was last recorded in a capturing stream."
    ),
    908: (
        "A stream capture sequence not initiated with the ::CU_STREAM_CAPTURE_MODE_RELAXED"
        " argument to ::cuStreamBeginCapture was passed to ::cuStreamEndCapture in a"
        " different thread."
    ),
    909: "This error indicates that the timeout specified for the wait operation has lapsed.",
    910: (
        "This error indicates that the graph update was not performed because it included"
        " changes which violated constraints specific to instantiated graph update."
    ),
    911: (
        "This indicates that an async error has occurred in a device outside of CUDA."
        " If CUDA was waiting for an external device's signal before consuming shared data,"
        " the external device signaled an error indicating that the data is not valid for"
        " consumption. This leaves the process in an inconsistent state and any further CUDA"
        " work will return the same error. To continue using CUDA, the process must be"
        " terminated and relaunched."
    ),
    912: "Indicates a kernel launch error due to cluster misconfiguration.",
    913: ("Indicates a function handle is not loaded when calling an API that requires a loaded function."),
    914: ("This error indicates one or more resources passed in are not valid resource types for the operation."),
    915: ("This error indicates one or more resources are insufficient or non-applicable for the operation."),
    916: ("This error indicates that an error happened during the key rotation sequence."),
    917: (
        "This error indicates that the requested operation is not permitted because the"
        " stream is in a detached state. This can occur if the green context associated"
        " with the stream has been destroyed, limiting the stream's operational capabilities."
    ),
    999: "This indicates that an unknown internal error has occurred.",
}
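A minimal usage sketch of how the table above might be consumed, assuming a hypothetical helper `explain_cu_result` that is not part of the generated module; the excerpt below copies two entries verbatim so the example is self-contained.

```python
# Small verbatim excerpt of the generated mapping, for a self-contained demo.
DRIVER_CU_RESULT_EXPLANATIONS = {
    0: (
        "The API call returned with no errors. In the case of query calls, this"
        " also means that the operation being queried is complete (see"
        " ::cuEventQuery() and ::cuStreamQuery())."
    ),
    2: (
        "The API call failed because it was unable to allocate enough memory or"
        " other resources to perform the requested operation."
    ),
}


def explain_cu_result(code: int) -> str:
    """Return the explanation for a CUresult code, with a fallback for codes
    absent from the table (hypothetical helper, not part of the module)."""
    return DRIVER_CU_RESULT_EXPLANATIONS.get(code, f"Unknown CUresult code: {code}")


print(explain_cu_result(2))       # prints the out-of-memory explanation
print(explain_cu_result(424242))  # prints the fallback message
```

Using `dict.get` with a default keeps the lookup total: new error codes introduced by a newer CUDA Toolkit than the one the table was generated from degrade to a generic message instead of raising `KeyError`.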