Coverage for cuda/core/_utils/driver_cu_result_explanations_frozen.py: 0.00% (1 statement)
# SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0

# CUDA Toolkit v13.1.1
_FALLBACK_EXPLANATIONS = {
    0: (
        "The API call returned with no errors. In the case of query calls, this"
        " also means that the operation being queried is complete (see"
        " ::cuEventQuery() and ::cuStreamQuery())."
    ),
    1: (
        "This indicates that one or more of the parameters passed to the API call"
        " is not within an acceptable range of values."
    ),
    2: (
        "The API call failed because it was unable to allocate enough memory or"
        " other resources to perform the requested operation."
    ),
    3: (
        "This indicates that the CUDA driver has not been initialized with"
        " ::cuInit() or that initialization has failed."
    ),
    4: "This indicates that the CUDA driver is in the process of shutting down.",
    5: (
        "This indicates that the profiler is not initialized for this run. This can"
        " happen when the application is running with external profiling tools"
        " like visual profiler."
    ),
    6: (
        "This error return is deprecated as of CUDA 5.0. It is no longer an error"
        " to attempt to enable/disable the profiling via ::cuProfilerStart or"
        " ::cuProfilerStop without initialization."
    ),
    7: (
        "This error return is deprecated as of CUDA 5.0. It is no longer an error"
        " to call cuProfilerStart() when profiling is already enabled."
    ),
    8: (
        "This error return is deprecated as of CUDA 5.0. It is no longer an error"
        " to call cuProfilerStop() when profiling is already disabled."
    ),
    34: (
        "This indicates that the CUDA driver that the application has loaded is a"
        " stub library. Applications that run with the stub rather than a real"
        " driver loaded will result in CUDA API returning this error."
    ),
    36: (
        "This indicates that the API call requires a newer CUDA driver than the one"
        " currently installed. Users should install an updated NVIDIA CUDA driver"
        " to allow the API call to succeed."
    ),
    46: (
        "This indicates that the requested CUDA device is unavailable at the current"
        " time. Devices are often unavailable due to use of"
        " ::CU_COMPUTEMODE_EXCLUSIVE_PROCESS or ::CU_COMPUTEMODE_PROHIBITED."
    ),
    100: ("This indicates that no CUDA-capable devices were detected by the installed CUDA driver."),
    101: (
        "This indicates that the device ordinal supplied by the user does not"
        " correspond to a valid CUDA device or that the action requested is"
        " invalid for the specified device."
    ),
    102: "This error indicates that the Grid license is not applied.",
    200: ("This indicates that the device kernel image is invalid. This can also indicate an invalid CUDA module."),
    201: (
        "This most frequently indicates that there is no context bound to the"
        " current thread. This can also be returned if the context passed to an"
        " API call is not a valid handle (such as a context that has had"
        " ::cuCtxDestroy() invoked on it). This can also be returned if a user"
        " mixes different API versions (i.e. 3010 context with 3020 API calls)."
        " See ::cuCtxGetApiVersion() for more details."
        " This can also be returned if the green context passed to an API call"
        " was not converted to a ::CUcontext using ::cuCtxFromGreenCtx API."
    ),
    202: (
        "This indicated that the context being supplied as a parameter to the"
        " API call was already the active context."
        " This error return is deprecated as of CUDA 3.2. It is no longer an"
        " error to attempt to push the active context via ::cuCtxPushCurrent()."
    ),
    205: "This indicates that a map or register operation has failed.",
    206: "This indicates that an unmap or unregister operation has failed.",
    207: ("This indicates that the specified array is currently mapped and thus cannot be destroyed."),
    208: "This indicates that the resource is already mapped.",
    209: (
        "This indicates that there is no kernel image available that is suitable"
        " for the device. This can occur when a user specifies code generation"
        " options for a particular CUDA source file that do not include the"
        " corresponding device configuration."
    ),
    210: "This indicates that a resource has already been acquired.",
    211: "This indicates that a resource is not mapped.",
    212: ("This indicates that a mapped resource is not available for access as an array."),
    213: ("This indicates that a mapped resource is not available for access as a pointer."),
    214: ("This indicates that an uncorrectable ECC error was detected during execution."),
    215: ("This indicates that the ::CUlimit passed to the API call is not supported by the active device."),
    216: (
        "This indicates that the ::CUcontext passed to the API call can"
        " only be bound to a single CPU thread at a time but is already"
        " bound to a CPU thread."
    ),
    217: ("This indicates that peer access is not supported across the given devices."),
    218: "This indicates that a PTX JIT compilation failed.",
    219: "This indicates an error with OpenGL or DirectX context.",
    220: ("This indicates that an uncorrectable NVLink error was detected during the execution."),
    221: "This indicates that the PTX JIT compiler library was not found.",
    222: "This indicates that the provided PTX was compiled with an unsupported toolchain.",
    223: "This indicates that the PTX JIT compilation was disabled.",
    224: ("This indicates that the ::CUexecAffinityType passed to the API call is not supported by the active device."),
    225: (
        "This indicates that the code to be compiled by the PTX JIT contains an"
        " unsupported call to cudaDeviceSynchronize."
    ),
    226: (
        "This indicates that an exception occurred on the device that is now"
        " contained by the GPU's error containment capability. Common causes are -"
        " a. Certain types of invalid accesses of peer GPU memory over nvlink"
        " b. Certain classes of hardware errors"
        " This leaves the process in an inconsistent state and any further CUDA"
        " work will return the same error. To continue using CUDA, the process must"
        " be terminated and relaunched."
    ),
    300: (
        "This indicates that the device kernel source is invalid. This includes"
        " compilation/linker errors encountered in device code or user error."
    ),
    301: "This indicates that the file specified was not found.",
    302: "This indicates that a link to a shared object failed to resolve.",
    303: "This indicates that initialization of a shared object failed.",
    304: "This indicates that an OS call failed.",
    400: (
        "This indicates that a resource handle passed to the API call was not"
        " valid. Resource handles are opaque types like ::CUstream and ::CUevent."
    ),
    401: (
        "This indicates that a resource required by the API call is not in a"
        " valid state to perform the requested operation."
    ),
    402: (
        "This indicates an attempt was made to introspect an object in a way that"
        " would discard semantically important information. This is either due to"
        " the object using functionality newer than the API version used to"
        " introspect it or omission of optional return arguments."
    ),
    500: (
        "This indicates that a named symbol was not found. Examples of symbols"
        " are global/constant variable names, driver function names, texture names,"
        " and surface names."
    ),
    600: (
        "This indicates that asynchronous operations issued previously have not"
        " completed yet. This result is not actually an error, but must be indicated"
        " differently than ::CUDA_SUCCESS (which indicates completion). Calls that"
        " may return this value include ::cuEventQuery() and ::cuStreamQuery()."
    ),
    700: (
        "While executing a kernel, the device encountered a"
        " load or store instruction on an invalid memory address."
        " This leaves the process in an inconsistent state and any further CUDA work"
        " will return the same error. To continue using CUDA, the process must be terminated"
        " and relaunched."
    ),
    701: (
        "This indicates that a launch did not occur because it did not have"
        " appropriate resources. This error usually indicates that the user has"
        " attempted to pass too many arguments to the device kernel, or the"
        " kernel launch specifies too many threads for the kernel's register"
        " count. Passing arguments of the wrong size (i.e. a 64-bit pointer"
        " when a 32-bit int is expected) is equivalent to passing too many"
        " arguments and can also result in this error."
    ),
    702: (
        "This indicates that the device kernel took too long to execute. This can"
        " only occur if timeouts are enabled - see the device attribute"
        " ::CU_DEVICE_ATTRIBUTE_KERNEL_EXEC_TIMEOUT for more information."
        " This leaves the process in an inconsistent state and any further CUDA work"
        " will return the same error. To continue using CUDA, the process must be terminated"
        " and relaunched."
    ),
    703: ("This error indicates a kernel launch that uses an incompatible texturing mode."),
    704: (
        "This error indicates that a call to ::cuCtxEnablePeerAccess() is"
        " trying to re-enable peer access to a context which has already"
        " had peer access to it enabled."
    ),
    705: (
        "This error indicates that ::cuCtxDisablePeerAccess() is"
        " trying to disable peer access which has not been enabled yet"
        " via ::cuCtxEnablePeerAccess()."
    ),
    708: ("This error indicates that the primary context for the specified device has already been initialized."),
    709: (
        "This error indicates that the context current to the calling thread"
        " has been destroyed using ::cuCtxDestroy, or is a primary context which"
        " has not yet been initialized."
    ),
    710: (
        "A device-side assert triggered during kernel execution. The context"
        " cannot be used anymore, and must be destroyed. All existing device"
        " memory allocations from this context are invalid and must be"
        " reconstructed if the program is to continue using CUDA."
    ),
    711: (
        "This error indicates that the hardware resources required to enable"
        " peer access have been exhausted for one or more of the devices"
        " passed to ::cuCtxEnablePeerAccess()."
    ),
    712: ("This error indicates that the memory range passed to ::cuMemHostRegister() has already been registered."),
    713: (
        "This error indicates that the pointer passed to ::cuMemHostUnregister()"
        " does not correspond to any currently registered memory region."
    ),
    714: (
        "While executing a kernel, the device encountered a stack error."
        " This can be due to stack corruption or exceeding the stack size limit."
        " This leaves the process in an inconsistent state and any further CUDA work"
        " will return the same error. To continue using CUDA, the process must be terminated"
        " and relaunched."
    ),
    715: (
        "While executing a kernel, the device encountered an illegal instruction."
        " This leaves the process in an inconsistent state and any further CUDA work"
        " will return the same error. To continue using CUDA, the process must be terminated"
        " and relaunched."
    ),
    716: (
        "While executing a kernel, the device encountered a load or store instruction"
        " on a memory address which is not aligned."
        " This leaves the process in an inconsistent state and any further CUDA work"
        " will return the same error. To continue using CUDA, the process must be terminated"
        " and relaunched."
    ),
    717: (
        "While executing a kernel, the device encountered an instruction"
        " which can only operate on memory locations in certain address spaces"
        " (global, shared, or local), but was supplied a memory address not"
        " belonging to an allowed address space."
        " This leaves the process in an inconsistent state and any further CUDA work"
        " will return the same error. To continue using CUDA, the process must be terminated"
        " and relaunched."
    ),
    718: (
        "While executing a kernel, the device program counter wrapped its address space."
        " This leaves the process in an inconsistent state and any further CUDA work"
        " will return the same error. To continue using CUDA, the process must be terminated"
        " and relaunched."
    ),
    719: (
        "An exception occurred on the device while executing a kernel. Common"
        " causes include dereferencing an invalid device pointer and accessing"
        " out of bounds shared memory. Less common cases can be system specific - more"
        " information about these cases can be found in the system specific user guide."
        " This leaves the process in an inconsistent state and any further CUDA work"
        " will return the same error. To continue using CUDA, the process must be terminated"
        " and relaunched."
    ),
    720: (
        "This error indicates that the number of blocks launched per grid for a kernel that was"
        " launched via either ::cuLaunchCooperativeKernel or ::cuLaunchCooperativeKernelMultiDevice"
        " exceeds the maximum number of blocks as allowed by ::cuOccupancyMaxActiveBlocksPerMultiprocessor"
        " or ::cuOccupancyMaxActiveBlocksPerMultiprocessorWithFlags times the number of multiprocessors"
        " as specified by the device attribute ::CU_DEVICE_ATTRIBUTE_MULTIPROCESSOR_COUNT."
    ),
    721: (
        "An exception occurred on the device while exiting a kernel using tensor memory: the"
        " tensor memory was not completely deallocated. This leaves the process in an inconsistent"
        " state and any further CUDA work will return the same error. To continue using CUDA, the"
        " process must be terminated and relaunched."
    ),
    800: "This error indicates that the attempted operation is not permitted.",
    801: ("This error indicates that the attempted operation is not supported on the current system or device."),
    802: (
        "This error indicates that the system is not yet ready to start any CUDA"
        " work. To continue using CUDA, verify the system configuration is in a"
        " valid state and all required driver daemons are actively running."
        " More information about this error can be found in the system specific"
        " user guide."
    ),
    803: (
        "This error indicates that there is a mismatch between the versions of"
        " the display driver and the CUDA driver. Refer to the compatibility documentation"
        " for supported versions."
    ),
    804: (
        "This error indicates that the system was upgraded to run with forward compatibility"
        " but the visible hardware detected by CUDA does not support this configuration."
        " Refer to the compatibility documentation for the supported hardware matrix or ensure"
        " that only supported hardware is visible during initialization via the CUDA_VISIBLE_DEVICES"
        " environment variable."
    ),
    805: "This error indicates that the MPS client failed to connect to the MPS control daemon or the MPS server.",
291 806: "This error indicates that the remote procedural call between the MPS server and the MPS client failed.",
    807: (
        "This error indicates that the MPS server is not ready to accept new MPS client requests."
        " This error can be returned when the MPS server is in the process of recovering from a fatal failure."
    ),
    808: "This error indicates that the hardware resources required to create MPS client have been exhausted.",
297 809: "This error indicates the the hardware resources required to support device connections have been exhausted.",
298 810: "This error indicates that the MPS client has been terminated by the server. To continue using CUDA, the process must be terminated and relaunched.",
299 811: "This error indicates that the module is using CUDA Dynamic Parallelism, but the current configuration, like MPS, does not support it.",
300 812: "This error indicates that a module contains an unsupported interaction between different versions of CUDA Dynamic Parallelism.",
    900: ("This error indicates that the operation is not permitted when the stream is capturing."),
    901: (
        "This error indicates that the current capture sequence on the stream"
        " has been invalidated due to a previous error."
    ),
    902: (
        "This error indicates that the operation would have resulted in a merge of two independent capture sequences."
    ),
    903: "This error indicates that the capture was not initiated in this stream.",
    904: ("This error indicates that the capture sequence contains a fork that was not joined to the primary stream."),
    905: (
        "This error indicates that a dependency would have been created which"
        " crosses the capture sequence boundary. Only implicit in-stream ordering"
        " dependencies are allowed to cross the boundary."
    ),
    906: ("This error indicates a disallowed implicit dependency on a current capture sequence from cudaStreamLegacy."),
    907: (
        "This error indicates that the operation is not permitted on an event which"
        " was last recorded in a capturing stream."
    ),
    908: (
        "A stream capture sequence not initiated with the ::CU_STREAM_CAPTURE_MODE_RELAXED"
        " argument to ::cuStreamBeginCapture was passed to ::cuStreamEndCapture in a"
        " different thread."
    ),
    909: "This error indicates that the timeout specified for the wait operation has lapsed.",
    910: (
        "This error indicates that the graph update was not performed because it included"
        " changes which violated constraints specific to instantiated graph update."
    ),
    911: (
        "This indicates that an async error has occurred in a device outside of CUDA."
        " If CUDA was waiting for an external device's signal before consuming shared data,"
        " the external device signaled an error indicating that the data is not valid for"
        " consumption. This leaves the process in an inconsistent state and any further CUDA"
        " work will return the same error. To continue using CUDA, the process must be"
        " terminated and relaunched."
    ),
    912: "Indicates a kernel launch error due to cluster misconfiguration.",
340 913: ("Indiciates a function handle is not loaded when calling an API that requires a loaded function."),
    914: ("This error indicates one or more resources passed in are not valid resource types for the operation."),
    915: ("This error indicates one or more resources are insufficient or non-applicable for the operation."),
    916: ("This error indicates that an error happened during the key rotation sequence."),
    917: (
        "This error indicates that the requested operation is not permitted because the"
        " stream is in a detached state. This can occur if the green context associated"
        " with the stream has been destroyed, limiting the stream's operational capabilities."
    ),
    999: "This indicates that an unknown internal error has occurred.",
}
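
# Illustrative usage sketch (not part of the frozen table above): a hypothetical
# helper showing how a raw CUresult integer could be mapped to its explanation,
# falling back to the "unknown internal error" text (code 999) for codes that do
# not appear in the table.
def _explain_cu_result(code: int) -> str:
    """Return the human-readable explanation for a CUresult error code."""
    return _FALLBACK_EXPLANATIONS.get(code, _FALLBACK_EXPLANATIONS[999])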