# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: LicenseRef-NVIDIA-SOFTWARE-LICENSE

# To regenerate the dictionary below, run:
#   ../../../../../toolshed/reformat_cuda_enums_as_py.py /usr/local/cuda/include/driver_types.h
# Replace the dictionary below with the output.
# Also update the CUDA Toolkit version number below.

# ruff: noqa: E501
# CUDA Toolkit v13.1.0
RUNTIME_CUDA_ERROR_EXPLANATIONS = {
    0: (
        "The API call returned with no errors. In the case of query calls, this"
        " also means that the operation being queried is complete (see"
        " ::cudaEventQuery() and ::cudaStreamQuery())."
    ),
    1: (
        "This indicates that one or more of the parameters passed to the API call"
        " is not within an acceptable range of values."
    ),
    2: (
        "The API call failed because it was unable to allocate enough memory or"
        " other resources to perform the requested operation."
    ),
    3: ("The API call failed because the CUDA driver and runtime could not be initialized."),
    4: (
        "This indicates that a CUDA Runtime API call cannot be executed because"
        " it is being called during process shut down, at a point in time after"
        " the CUDA driver has been unloaded."
    ),
    5: (
        "This indicates that the profiler is not initialized for this run. This can"
        " happen when the application is running with external profiling tools"
        " like Visual Profiler."
    ),
    6: (
        "This error return is deprecated as of CUDA 5.0. It is no longer an error"
        " to attempt to enable/disable the profiling via ::cudaProfilerStart or"
        " ::cudaProfilerStop without initialization."
    ),
    7: (
        "This error return is deprecated as of CUDA 5.0. It is no longer an error"
        " to call cudaProfilerStart() when profiling is already enabled."
    ),
    8: (
        "This error return is deprecated as of CUDA 5.0. It is no longer an error"
        " to call cudaProfilerStop() when profiling is already disabled."
    ),
    9: (
        "This indicates that a kernel launch is requesting resources that can"
        " never be satisfied by the current device. Requesting more shared memory"
        " per block than the device supports will trigger this error, as will"
        " requesting too many threads or blocks. See ::cudaDeviceProp for more"
        " device limitations."
    ),
    12: (
        "This indicates that one or more of the pitch-related parameters passed"
        " to the API call is not within the acceptable range for pitch."
    ),
    13: ("This indicates that the symbol name/identifier passed to the API call is not a valid name or identifier."),
    16: (
        "This indicates that at least one host pointer passed to the API call is"
        " not a valid host pointer."
        " This error return is deprecated as of CUDA 10.1."
    ),
    17: (
        "This indicates that at least one device pointer passed to the API call is"
        " not a valid device pointer."
        " This error return is deprecated as of CUDA 10.1."
    ),
    18: ("This indicates that the texture passed to the API call is not a valid texture."),
    19: (
        "This indicates that the texture binding is not valid. This occurs if you"
        " call ::cudaGetTextureAlignmentOffset() with an unbound texture."
    ),
    20: (
        "This indicates that the channel descriptor passed to the API call is not"
        " valid. This occurs if the format is not one of the formats specified by"
        " ::cudaChannelFormatKind, or if one of the dimensions is invalid."
    ),
    21: (
        "This indicates that the direction of the memcpy passed to the API call is"
        " not one of the types specified by ::cudaMemcpyKind."
    ),
    22: (
        "This indicated that the user has taken the address of a constant variable,"
        " which was forbidden up until the CUDA 3.1 release."
        " This error return is deprecated as of CUDA 3.1. Variables in constant"
        " memory may now have their address taken by the runtime via"
        " ::cudaGetSymbolAddress()."
    ),
    23: (
        "This indicated that a texture fetch was not able to be performed."
        " This was previously used for device emulation of texture operations."
        " This error return is deprecated as of CUDA 3.1. Device emulation mode was"
        " removed with the CUDA 3.1 release."
    ),
    24: (
        "This indicated that a texture was not bound for access."
        " This was previously used for device emulation of texture operations."
        " This error return is deprecated as of CUDA 3.1. Device emulation mode was"
        " removed with the CUDA 3.1 release."
    ),
    25: (
        "This indicated that a synchronization operation had failed."
        " This was previously used for some device emulation functions."
        " This error return is deprecated as of CUDA 3.1. Device emulation mode was"
        " removed with the CUDA 3.1 release."
    ),
    26: (
        "This indicates that a non-float texture was being accessed with linear"
        " filtering. This is not supported by CUDA."
    ),
    27: (
        "This indicates that an attempt was made to read an unsupported data type as a"
        " normalized float. This is not supported by CUDA."
    ),
    28: (
        "Mixing of device and device emulation code was not allowed."
        " This error return is deprecated as of CUDA 3.1. Device emulation mode was"
        " removed with the CUDA 3.1 release."
    ),
    31: (
        "This indicates that the API call is not yet implemented. Production"
        " releases of CUDA will never return this error."
        " This error return is deprecated as of CUDA 4.1."
    ),
    32: (
        "This indicated that an emulated device pointer exceeded the 32-bit address"
        " range."
        " This error return is deprecated as of CUDA 3.1. Device emulation mode was"
        " removed with the CUDA 3.1 release."
    ),
    34: (
        "This indicates that the CUDA driver that the application has loaded is a"
        " stub library. Applications that run with the stub rather than a real"
        " driver loaded will result in CUDA API calls returning this error."
    ),
    35: (
        "This indicates that the installed NVIDIA CUDA driver is older than the"
        " CUDA runtime library. This is not a supported configuration. Users should"
        " install an updated NVIDIA display driver to allow the application to run."
    ),
    36: (
        "This indicates that the API call requires a newer CUDA driver than the one"
        " currently installed. Users should install an updated NVIDIA CUDA driver"
        " to allow the API call to succeed."
    ),
    37: ("This indicates that the surface passed to the API call is not a valid surface."),
    43: (
        "This indicates that multiple global or constant variables (across separate"
        " CUDA source files in the application) share the same string name."
    ),
    44: (
        "This indicates that multiple textures (across separate CUDA source"
        " files in the application) share the same string name."
    ),
    45: (
        "This indicates that multiple surfaces (across separate CUDA source"
        " files in the application) share the same string name."
    ),
    46: (
        "This indicates that all CUDA devices are busy or unavailable at the current"
        " time. Devices are often busy/unavailable due to use of"
        " ::cudaComputeModeProhibited, ::cudaComputeModeExclusiveProcess, or when long"
        " running CUDA kernels have filled up the GPU and are blocking new work"
        " from starting. They can also be unavailable due to memory constraints"
        " on a device that already has active CUDA work being performed."
    ),
    49: (
        "This indicates that the current context is not compatible with the CUDA"
        " Runtime. This can only occur if you are using CUDA"
        " Runtime/Driver interoperability and have created an existing Driver"
        " context using the driver API. The Driver context may be incompatible"
        " either because the Driver context was created using an older version"
        " of the API, because the Runtime API call expects a primary driver"
        " context and the Driver context is not primary, or because the Driver"
        ' context has been destroyed. Please see "Interactions'
        ' with the CUDA Driver API" for more information.'
    ),
    52: (
        "The device function being invoked (usually via ::cudaLaunchKernel()) was not"
        " previously configured via the ::cudaConfigureCall() function."
    ),
    53: (
        "This indicated that a previous kernel launch failed. This was previously"
        " used for device emulation of kernel launches."
        " This error return is deprecated as of CUDA 3.1. Device emulation mode was"
        " removed with the CUDA 3.1 release."
    ),
    65: (
        "This error indicates that a device runtime grid launch did not occur"
        " because the depth of the child grid would exceed the maximum supported"
        " number of nested grid launches."
    ),
    66: (
        "This error indicates that a grid launch did not occur because the kernel"
        " uses file-scoped textures which are unsupported by the device runtime."
        " Kernels launched via the device runtime only support textures created with"
        " the Texture Object APIs."
    ),
    67: (
        "This error indicates that a grid launch did not occur because the kernel"
        " uses file-scoped surfaces which are unsupported by the device runtime."
        " Kernels launched via the device runtime only support surfaces created with"
        " the Surface Object APIs."
    ),
    68: (
        "This error indicates that a call to ::cudaDeviceSynchronize made from"
        " the device runtime failed because the call was made at grid depth greater"
        " than either the default (2 levels of grids) or user-specified device"
        " limit ::cudaLimitDevRuntimeSyncDepth. To be able to synchronize on"
        " launched grids at a greater depth successfully, the maximum nested"
        " depth at which ::cudaDeviceSynchronize will be called must be specified"
        " with the ::cudaLimitDevRuntimeSyncDepth limit to the ::cudaDeviceSetLimit"
        " API before the host-side launch of a kernel using the device runtime."
        " Keep in mind that additional levels of sync depth require the runtime"
        " to reserve large amounts of device memory that cannot be used for"
        " user allocations. Note that ::cudaDeviceSynchronize made from device"
        " runtime is only supported on devices of compute capability < 9.0."
    ),
    69: (
        "This error indicates that a device runtime grid launch failed because"
        " the launch would exceed the limit ::cudaLimitDevRuntimePendingLaunchCount."
        " For this launch to proceed successfully, ::cudaDeviceSetLimit must be"
        " called to set the ::cudaLimitDevRuntimePendingLaunchCount to be higher"
        " than the upper bound of outstanding launches that can be issued to the"
        " device runtime. Keep in mind that raising the limit of pending device"
        " runtime launches will require the runtime to reserve device memory that"
        " cannot be used for user allocations."
    ),
    98: ("The requested device function does not exist or is not compiled for the proper device architecture."),
    100: ("This indicates that no CUDA-capable devices were detected by the installed CUDA driver."),
    101: (
        "This indicates that the device ordinal supplied by the user does not"
        " correspond to a valid CUDA device or that the action requested is"
        " invalid for the specified device."
    ),
    102: "This indicates that the device doesn't have a valid Grid License.",
    103: (
        "By default, the CUDA runtime may perform a minimal set of self-tests,"
        " as well as CUDA driver tests, to establish the validity of both."
        " Introduced in CUDA 11.2, this error return indicates that at least one"
        " of these tests has failed and the validity of either the runtime"
        " or the driver could not be established."
    ),
    127: "This indicates an internal startup failure in the CUDA runtime.",
    200: "This indicates that the device kernel image is invalid.",
    201: (
        "This most frequently indicates that there is no context bound to the"
        " current thread. This can also be returned if the context passed to an"
        " API call is not a valid handle (such as a context that has had"
        " ::cuCtxDestroy() invoked on it). This can also be returned if a user"
        " mixes different API versions (i.e. 3010 context with 3020 API calls)."
        " See ::cuCtxGetApiVersion() for more details."
    ),
    205: "This indicates that the buffer object could not be mapped.",
    206: "This indicates that the buffer object could not be unmapped.",
    207: ("This indicates that the specified array is currently mapped and thus cannot be destroyed."),
    208: "This indicates that the resource is already mapped.",
    209: (
        "This indicates that there is no kernel image available that is suitable"
        " for the device. This can occur when a user specifies code generation"
        " options for a particular CUDA source file that do not include the"
        " corresponding device configuration."
    ),
    210: "This indicates that a resource has already been acquired.",
    211: "This indicates that a resource is not mapped.",
    212: ("This indicates that a mapped resource is not available for access as an array."),
    213: ("This indicates that a mapped resource is not available for access as a pointer."),
    214: ("This indicates that an uncorrectable ECC error was detected during execution."),
    215: ("This indicates that the ::cudaLimit passed to the API call is not supported by the active device."),
    216: (
        "This indicates that a call tried to access an exclusive-thread device that"
        " is already in use by a different thread."
    ),
    217: ("This error indicates that P2P access is not supported across the given devices."),
    218: (
        "A PTX compilation failed. The runtime may fall back to compiling PTX if"
        " an application does not contain a suitable binary for the current device."
    ),
    219: "This indicates an error with the OpenGL or DirectX context.",
    220: ("This indicates that an uncorrectable NVLink error was detected during the execution."),
    221: (
        "This indicates that the PTX JIT compiler library was not found. The JIT Compiler"
        " library is used for PTX compilation. The runtime may fall back to compiling PTX"
        " if an application does not contain a suitable binary for the current device."
    ),
    222: (
        "This indicates that the provided PTX was compiled with an unsupported toolchain."
        " The most common reason for this is that the PTX was generated by a compiler newer"
        " than what is supported by the CUDA driver and PTX JIT compiler."
    ),
    223: (
        "This indicates that JIT compilation was disabled. JIT compilation is used to compile"
        " PTX. The runtime may fall back to compiling PTX if an application does not contain"
        " a suitable binary for the current device."
    ),
    224: "This indicates that the provided execution affinity is not supported by the device.",
    225: (
        "This indicates that the code to be compiled by the PTX JIT contains an unsupported call to ::cudaDeviceSynchronize."
    ),
    226: (
        "This indicates that an exception occurred on the device that is now"
        " contained by the GPU's error containment capability. Common causes are:"
        " (a) certain types of invalid accesses of peer GPU memory over NVLink;"
        " (b) certain classes of hardware errors."
        " This leaves the process in an inconsistent state and any further CUDA"
        " work will return the same error. To continue using CUDA, the process must"
        " be terminated and relaunched."
    ),
    300: "This indicates that the device kernel source is invalid.",
    301: "This indicates that the file specified was not found.",
    302: "This indicates that a link to a shared object failed to resolve.",
    303: "This indicates that initialization of a shared object failed.",
    304: "This error indicates that an OS call failed.",
    400: (
        "This indicates that a resource handle passed to the API call was not"
        " valid. Resource handles are opaque types like ::cudaStream_t and"
        " ::cudaEvent_t."
    ),
    401: (
        "This indicates that a resource required by the API call is not in a"
        " valid state to perform the requested operation."
    ),
    402: (
        "This indicates an attempt was made to introspect an object in a way that"
        " would discard semantically important information. This is either due to"
        " the object using functionality newer than the API version used to"
        " introspect it or omission of optional return arguments."
    ),
    500: (
        "This indicates that a named symbol was not found. Examples of symbols"
        " are global/constant variable names, driver function names, texture names,"
        " and surface names."
    ),
    600: (
        "This indicates that asynchronous operations issued previously have not"
        " completed yet. This result is not actually an error, but must be indicated"
        " differently than ::cudaSuccess (which indicates completion). Calls that"
        " may return this value include ::cudaEventQuery() and ::cudaStreamQuery()."
    ),
    700: (
        "The device encountered a load or store instruction on an invalid memory address."
        " This leaves the process in an inconsistent state and any further CUDA work"
        " will return the same error. To continue using CUDA, the process must be terminated"
        " and relaunched."
    ),
    701: (
        "This indicates that a launch did not occur because it did not have"
        " appropriate resources. Although this error is similar to"
        " ::cudaErrorInvalidConfiguration, this error usually indicates that the"
        " user has attempted to pass too many arguments to the device kernel, or the"
        " kernel launch specifies too many threads for the kernel's register count."
    ),
    702: (
        "This indicates that the device kernel took too long to execute. This can"
        " only occur if timeouts are enabled - see the device attribute"
        " ::cudaDevAttrKernelExecTimeout for more information."
        " This leaves the process in an inconsistent state and any further CUDA work"
        " will return the same error. To continue using CUDA, the process must be terminated"
        " and relaunched."
    ),
    703: ("This error indicates a kernel launch that uses an incompatible texturing mode."),
    704: (
        "This error indicates that a call to ::cudaDeviceEnablePeerAccess() is"
        " trying to re-enable peer addressing from a context which has already"
        " had peer addressing enabled."
    ),
    705: (
        "This error indicates that ::cudaDeviceDisablePeerAccess() is trying to"
        " disable peer addressing which has not been enabled yet via"
        " ::cudaDeviceEnablePeerAccess()."
    ),
    708: (
        "This indicates that the user has called ::cudaSetValidDevices(),"
        " ::cudaSetDeviceFlags(), ::cudaD3D9SetDirect3DDevice(),"
        " ::cudaD3D10SetDirect3DDevice, ::cudaD3D11SetDirect3DDevice(), or"
        " ::cudaVDPAUSetVDPAUDevice() after initializing the CUDA runtime by"
        " calling non-device management operations (allocating memory and"
        " launching kernels are examples of non-device management operations)."
        " This error can also be returned if using runtime/driver"
        " interoperability and there is an existing ::CUcontext active on the"
        " host thread."
    ),
    709: (
        "This error indicates that the context current to the calling thread"
        " has been destroyed using ::cuCtxDestroy, or is a primary context which"
        " has not yet been initialized."
    ),
    710: (
        "An assert triggered in device code during kernel execution. The device"
        " cannot be used again. All existing allocations are invalid. To continue"
        " using CUDA, the process must be terminated and relaunched."
    ),
    711: (
        "This error indicates that the hardware resources required to enable"
        " peer access have been exhausted for one or more of the devices"
        " passed to ::cudaEnablePeerAccess()."
    ),
    712: ("This error indicates that the memory range passed to ::cudaHostRegister() has already been registered."),
    713: (
        "This error indicates that the pointer passed to ::cudaHostUnregister()"
        " does not correspond to any currently registered memory region."
    ),
    714: (
        "The device encountered an error in the call stack during kernel execution,"
        " possibly due to stack corruption or exceeding the stack size limit."
        " This leaves the process in an inconsistent state and any further CUDA work"
        " will return the same error. To continue using CUDA, the process must be terminated"
        " and relaunched."
    ),
    715: (
        "The device encountered an illegal instruction during kernel execution."
        " This leaves the process in an inconsistent state and any further CUDA work"
        " will return the same error. To continue using CUDA, the process must be terminated"
        " and relaunched."
    ),
    716: (
        "The device encountered a load or store instruction"
        " on a memory address which is not aligned."
        " This leaves the process in an inconsistent state and any further CUDA work"
        " will return the same error. To continue using CUDA, the process must be terminated"
        " and relaunched."
    ),
    717: (
        "While executing a kernel, the device encountered an instruction"
        " which can only operate on memory locations in certain address spaces"
        " (global, shared, or local), but was supplied a memory address not"
        " belonging to an allowed address space."
        " This leaves the process in an inconsistent state and any further CUDA work"
        " will return the same error. To continue using CUDA, the process must be terminated"
        " and relaunched."
    ),
    718: (
        "The device encountered an invalid program counter."
        " This leaves the process in an inconsistent state and any further CUDA work"
        " will return the same error. To continue using CUDA, the process must be terminated"
        " and relaunched."
    ),
    719: (
        "An exception occurred on the device while executing a kernel. Common"
        " causes include dereferencing an invalid device pointer and accessing"
        " out of bounds shared memory. Less common cases can be system specific - more"
        " information about these cases can be found in the system specific user guide."
        " This leaves the process in an inconsistent state and any further CUDA work"
        " will return the same error. To continue using CUDA, the process must be terminated"
        " and relaunched."
    ),
    720: (
        "This error indicates that the number of blocks launched per grid for a kernel that was"
        " launched via ::cudaLaunchCooperativeKernel"
        " exceeds the maximum number of blocks as allowed by ::cudaOccupancyMaxActiveBlocksPerMultiprocessor"
        " or ::cudaOccupancyMaxActiveBlocksPerMultiprocessorWithFlags times the number of multiprocessors"
        " as specified by the device attribute ::cudaDevAttrMultiProcessorCount."
    ),
    721: (
        "An exception occurred on the device while exiting a kernel using tensor memory: the"
        " tensor memory was not completely deallocated. This leaves the process in an inconsistent"
        " state and any further CUDA work will return the same error. To continue using CUDA, the"
        " process must be terminated and relaunched."
    ),
    800: "This error indicates the attempted operation is not permitted.",
    801: ("This error indicates the attempted operation is not supported on the current system or device."),
    802: (
        "This error indicates that the system is not yet ready to start any CUDA"
        " work. To continue using CUDA, verify the system configuration is in a"
        " valid state and all required driver daemons are actively running."
        " More information about this error can be found in the system specific"
        " user guide."
    ),
    803: (
        "This error indicates that there is a mismatch between the versions of"
        " the display driver and the CUDA driver. Refer to the compatibility documentation"
        " for supported versions."
    ),
    804: (
        "This error indicates that the system was upgraded to run with forward compatibility"
        " but the visible hardware detected by CUDA does not support this configuration."
        " Refer to the compatibility documentation for the supported hardware matrix or ensure"
        " that only supported hardware is visible during initialization via the CUDA_VISIBLE_DEVICES"
        " environment variable."
    ),
    805: "This error indicates that the MPS client failed to connect to the MPS control daemon or the MPS server.",
    806: "This error indicates that the remote procedure call between the MPS server and the MPS client failed.",
    807: (
        "This error indicates that the MPS server is not ready to accept new MPS client requests."
        " This error can be returned when the MPS server is in the process of recovering from a fatal failure."
    ),
    808: "This error indicates that the hardware resources required to create an MPS client have been exhausted.",
    809: "This error indicates that the hardware resources required to support device connections have been exhausted.",
    810: "This error indicates that the MPS client has been terminated by the server. To continue using CUDA, the process must be terminated and relaunched.",
    811: "This error indicates that the program is using CUDA Dynamic Parallelism, but the current configuration, like MPS, does not support it.",
    812: "This error indicates that the program contains an unsupported interaction between different versions of CUDA Dynamic Parallelism.",
    900: "The operation is not permitted when the stream is capturing.",
    901: ("The current capture sequence on the stream has been invalidated due to a previous error."),
    902: ("The operation would have resulted in a merge of two independent capture sequences."),
    903: "The capture was not initiated in this stream.",
    904: ("The capture sequence contains a fork that was not joined to the primary stream."),
    905: (
        "A dependency would have been created which crosses the capture sequence"
        " boundary. Only implicit in-stream ordering dependencies are allowed to"
        " cross the boundary."
    ),
    906: (
        "The operation would have resulted in a disallowed implicit dependency on"
        " a current capture sequence from cudaStreamLegacy."
    ),
    907: ("The operation is not permitted on an event which was last recorded in a capturing stream."),
    908: (
        "A stream capture sequence not initiated with the ::cudaStreamCaptureModeRelaxed"
        " argument to ::cudaStreamBeginCapture was passed to ::cudaStreamEndCapture in a"
        " different thread."
    ),
    909: "This indicates that the wait operation has timed out.",
    910: (
        "This error indicates that the graph update was not performed because it included"
        " changes which violated constraints specific to instantiated graph update."
    ),
    911: (
        "This indicates that an async error has occurred in a device outside of CUDA."
        " If CUDA was waiting for an external device's signal before consuming shared data,"
        " the external device signaled an error indicating that the data is not valid for"
        " consumption. This leaves the process in an inconsistent state and any further CUDA"
        " work will return the same error. To continue using CUDA, the process must be"
        " terminated and relaunched."
    ),
    912: ("This indicates that a kernel launch error has occurred due to cluster misconfiguration."),
    913: ("Indicates that a function handle is not loaded when calling an API that requires a loaded function."),
    914: ("This error indicates one or more resources passed in are not valid resource types for the operation."),
    915: ("This error indicates one or more resources are insufficient or non-applicable for the operation."),
    917: (
        "This error indicates that the requested operation is not permitted because the"
        " stream is in a detached state. This can occur if the green context associated"
        " with the stream has been destroyed, limiting the stream's operational capabilities."
    ),
    999: "This indicates that an unknown internal error has occurred.",
    10000: (
        "Any unhandled CUDA driver error is added to this value and returned via"
        " the runtime. Production releases of CUDA should not return such errors."
        " This error return is deprecated as of CUDA 4.1."
    ),
}
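
# Usage sketch (illustrative only; not part of the generated table, and the
# helper name and fallback text below are assumptions, not cuda.core public
# API): a caller holding a raw cudaError_t integer can look up the
# driver_types.h explanation above, falling back to a generic message for
# codes missing from this CUDA Toolkit version.
def explain_runtime_cuda_error(code: int) -> str:
    """Return the explanation string for a CUDA runtime error code, if known."""
    # dict.get keeps unknown codes (e.g. from a newer toolkit) from raising.
    return RUNTIME_CUDA_ERROR_EXPLANATIONS.get(
        code, f"No explanation available for CUDA runtime error code {code}."
    )

# For example, explain_runtime_cuda_error(2) returns the text for
# cudaErrorMemoryAllocation ("The API call failed because it was unable to
# allocate enough memory ..."), while an unmapped code yields the fallback.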