Coverage for cuda / core / _utils / runtime_cuda_error_explanations_frozen.py: 0.00%

1 statements  

# SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0

# CUDA Toolkit v13.1.1
_FALLBACK_EXPLANATIONS = {
    0: (
        "The API call returned with no errors. In the case of query calls, this"
        " also means that the operation being queried is complete (see"
        " ::cudaEventQuery() and ::cudaStreamQuery())."
    ),
    1: (
        "This indicates that one or more of the parameters passed to the API call"
        " is not within an acceptable range of values."
    ),
    2: (
        "The API call failed because it was unable to allocate enough memory or"
        " other resources to perform the requested operation."
    ),
    3: ("The API call failed because the CUDA driver and runtime could not be initialized."),
    4: (
        "This indicates that a CUDA Runtime API call cannot be executed because"
        " it is being called during process shut down, at a point in time after"
        " CUDA driver has been unloaded."
    ),
    5: (
        "This indicates profiler is not initialized for this run. This can"
        " happen when the application is running with external profiling tools"
        " like visual profiler."
    ),
    6: (
        "This error return is deprecated as of CUDA 5.0. It is no longer an error"
        " to attempt to enable/disable the profiling via ::cudaProfilerStart or"
        " ::cudaProfilerStop without initialization."
    ),
    7: (
        "This error return is deprecated as of CUDA 5.0. It is no longer an error"
        " to call cudaProfilerStart() when profiling is already enabled."
    ),
    8: (
        "This error return is deprecated as of CUDA 5.0. It is no longer an error"
        " to call cudaProfilerStop() when profiling is already disabled."
    ),
    9: (
        "This indicates that a kernel launch is requesting resources that can"
        " never be satisfied by the current device. Requesting more shared memory"
        " per block than the device supports will trigger this error, as will"
        " requesting too many threads or blocks. See ::cudaDeviceProp for more"
        " device limitations."
    ),
    12: (
        "This indicates that one or more of the pitch-related parameters passed"
        " to the API call is not within the acceptable range for pitch."
    ),
    13: ("This indicates that the symbol name/identifier passed to the API call is not a valid name or identifier."),
    16: (
        "This indicates that at least one host pointer passed to the API call is"
        " not a valid host pointer."
        " This error return is deprecated as of CUDA 10.1."
    ),
    17: (
        "This indicates that at least one device pointer passed to the API call is"
        " not a valid device pointer."
        " This error return is deprecated as of CUDA 10.1."
    ),
    18: ("This indicates that the texture passed to the API call is not a valid texture."),
    19: (
        "This indicates that the texture binding is not valid. This occurs if you"
        " call ::cudaGetTextureAlignmentOffset() with an unbound texture."
    ),
    20: (
        "This indicates that the channel descriptor passed to the API call is not"
        " valid. This occurs if the format is not one of the formats specified by"
        " ::cudaChannelFormatKind, or if one of the dimensions is invalid."
    ),
    21: (
        "This indicates that the direction of the memcpy passed to the API call is"
        " not one of the types specified by ::cudaMemcpyKind."
    ),
    22: (
        "This indicated that the user has taken the address of a constant variable,"
        " which was forbidden up until the CUDA 3.1 release."
        " This error return is deprecated as of CUDA 3.1. Variables in constant"
        " memory may now have their address taken by the runtime via"
        " ::cudaGetSymbolAddress()."
    ),
    23: (
        "This indicated that a texture fetch was not able to be performed."
        " This was previously used for device emulation of texture operations."
        " This error return is deprecated as of CUDA 3.1. Device emulation mode was"
        " removed with the CUDA 3.1 release."
    ),
    24: (
        "This indicated that a texture was not bound for access."
        " This was previously used for device emulation of texture operations."
        " This error return is deprecated as of CUDA 3.1. Device emulation mode was"
        " removed with the CUDA 3.1 release."
    ),
    25: (
        "This indicated that a synchronization operation had failed."
        " This was previously used for some device emulation functions."
        " This error return is deprecated as of CUDA 3.1. Device emulation mode was"
        " removed with the CUDA 3.1 release."
    ),
    26: (
        "This indicates that a non-float texture was being accessed with linear"
        " filtering. This is not supported by CUDA."
    ),
    27: (
        "This indicates that an attempt was made to read an unsupported data type as a"
        " normalized float. This is not supported by CUDA."
    ),
    28: (
        "Mixing of device and device emulation code was not allowed."
        " This error return is deprecated as of CUDA 3.1. Device emulation mode was"
        " removed with the CUDA 3.1 release."
    ),
    31: (
        "This indicates that the API call is not yet implemented. Production"
        " releases of CUDA will never return this error."
        " This error return is deprecated as of CUDA 4.1."
    ),
    32: (
        "This indicated that an emulated device pointer exceeded the 32-bit address"
        " range."
        " This error return is deprecated as of CUDA 3.1. Device emulation mode was"
        " removed with the CUDA 3.1 release."
    ),
    34: (
        "This indicates that the CUDA driver that the application has loaded is a"
        " stub library. Applications that run with the stub rather than a real"
        " driver loaded will result in CUDA API returning this error."
    ),
    35: (
        "This indicates that the installed NVIDIA CUDA driver is older than the"
        " CUDA runtime library. This is not a supported configuration. Users should"
        " install an updated NVIDIA display driver to allow the application to run."
    ),
    36: (
        "This indicates that the API call requires a newer CUDA driver than the one"
        " currently installed. Users should install an updated NVIDIA CUDA driver"
        " to allow the API call to succeed."
    ),
    37: ("This indicates that the surface passed to the API call is not a valid surface."),
    43: (
        "This indicates that multiple global or constant variables (across separate"
        " CUDA source files in the application) share the same string name."
    ),
    44: (
        "This indicates that multiple textures (across separate CUDA source"
        " files in the application) share the same string name."
    ),
    45: (
        "This indicates that multiple surfaces (across separate CUDA source"
        " files in the application) share the same string name."
    ),
    46: (
        "This indicates that all CUDA devices are busy or unavailable at the current"
        " time. Devices are often busy/unavailable due to use of"
        " ::cudaComputeModeProhibited, ::cudaComputeModeExclusiveProcess, or when long"
        " running CUDA kernels have filled up the GPU and are blocking new work"
        " from starting. They can also be unavailable due to memory constraints"
        " on a device that already has active CUDA work being performed."
    ),
    49: (
        "This indicates that the current context is not compatible with this"
166 " the CUDA Runtime. This can only occur if you are using CUDA" 

167 " Runtime/Driver interoperability and have created an existing Driver" 

168 " context using the driver API. The Driver context may be incompatible" 

169 " either because the Driver context was created using an older version" 

170 " of the API, because the Runtime API call expects a primary driver" 

171 " context and the Driver context is not primary, or because the Driver" 

172 ' context has been destroyed. Please see CUDART_DRIVER "Interactions' 

173 ' with the CUDA Driver API" for more information.' 

174 ), 

175 52: ( 

176 "The device function being invoked (usually via ::cudaLaunchKernel()) was not" 

177 " previously configured via the ::cudaConfigureCall() function." 

178 ), 

179 53: ( 

180 "This indicated that a previous kernel launch failed. This was previously" 

181 " used for device emulation of kernel launches." 

182 " This error return is deprecated as of CUDA 3.1. Device emulation mode was" 

183 " removed with the CUDA 3.1 release." 

184 ), 

185 65: ( 

186 "This error indicates that a device runtime grid launch did not occur" 

187 " because the depth of the child grid would exceed the maximum supported" 

188 " number of nested grid launches." 

189 ), 

190 66: ( 

191 "This error indicates that a grid launch did not occur because the kernel" 

192 " uses file-scoped textures which are unsupported by the device runtime." 

193 " Kernels launched via the device runtime only support textures created with" 

194 " the Texture Object API's." 

    ),
    67: (
        "This error indicates that a grid launch did not occur because the kernel"
        " uses file-scoped surfaces which are unsupported by the device runtime."
        " Kernels launched via the device runtime only support surfaces created with"
200 " the Surface Object API's." 

    ),
    68: (
        "This error indicates that a call to ::cudaDeviceSynchronize made from"
        " the device runtime failed because the call was made at grid depth greater"
205 " than than either the default (2 levels of grids) or user specified device" 

206 " limit ::cudaLimitDevRuntimeSyncDepth. To be able to synchronize on" 

207 " launched grids at a greater depth successfully, the maximum nested" 

208 " depth at which ::cudaDeviceSynchronize will be called must be specified" 

209 " with the ::cudaLimitDevRuntimeSyncDepth limit to the ::cudaDeviceSetLimit" 

210 " api before the host-side launch of a kernel using the device runtime." 

211 " Keep in mind that additional levels of sync depth require the runtime" 

212 " to reserve large amounts of device memory that cannot be used for" 

213 " user allocations. Note that ::cudaDeviceSynchronize made from device" 

214 " runtime is only supported on devices of compute capability < 9.0." 

215 ), 

216 69: ( 

217 "This error indicates that a device runtime grid launch failed because" 

218 " the launch would exceed the limit ::cudaLimitDevRuntimePendingLaunchCount." 

219 " For this launch to proceed successfully, ::cudaDeviceSetLimit must be" 

220 " called to set the ::cudaLimitDevRuntimePendingLaunchCount to be higher" 

221 " than the upper bound of outstanding launches that can be issued to the" 

222 " device runtime. Keep in mind that raising the limit of pending device" 

223 " runtime launches will require the runtime to reserve device memory that" 

224 " cannot be used for user allocations." 

225 ), 

226 98: ("The requested device function does not exist or is not compiled for the proper device architecture."), 

227 100: ("This indicates that no CUDA-capable devices were detected by the installed CUDA driver."), 

228 101: ( 

229 "This indicates that the device ordinal supplied by the user does not" 

230 " correspond to a valid CUDA device or that the action requested is" 

231 " invalid for the specified device." 

232 ), 

233 102: "This indicates that the device doesn't have a valid Grid License.", 

234 103: ( 

235 "By default, the CUDA runtime may perform a minimal set of self-tests," 

236 " as well as CUDA driver tests, to establish the validity of both." 

237 " Introduced in CUDA 11.2, this error return indicates that at least one" 

238 " of these tests has failed and the validity of either the runtime" 

239 " or the driver could not be established." 

240 ), 

241 127: "This indicates an internal startup failure in the CUDA runtime.", 

242 200: "This indicates that the device kernel image is invalid.", 

243 201: ( 

244 "This most frequently indicates that there is no context bound to the" 

245 " current thread. This can also be returned if the context passed to an" 

246 " API call is not a valid handle (such as a context that has had" 

247 " ::cuCtxDestroy() invoked on it). This can also be returned if a user" 

248 " mixes different API versions (i.e. 3010 context with 3020 API calls)." 

249 " See ::cuCtxGetApiVersion() for more details." 

250 ), 

251 205: "This indicates that the buffer object could not be mapped.", 

252 206: "This indicates that the buffer object could not be unmapped.", 

253 207: ("This indicates that the specified array is currently mapped and thus cannot be destroyed."), 

254 208: "This indicates that the resource is already mapped.", 

255 209: ( 

256 "This indicates that there is no kernel image available that is suitable" 

257 " for the device. This can occur when a user specifies code generation" 

258 " options for a particular CUDA source file that do not include the" 

259 " corresponding device configuration." 

260 ), 

261 210: "This indicates that a resource has already been acquired.", 

262 211: "This indicates that a resource is not mapped.", 

263 212: ("This indicates that a mapped resource is not available for access as an array."), 

264 213: ("This indicates that a mapped resource is not available for access as a pointer."), 

265 214: ("This indicates that an uncorrectable ECC error was detected during execution."), 

266 215: ("This indicates that the ::cudaLimit passed to the API call is not supported by the active device."), 

267 216: ( 

268 "This indicates that a call tried to access an exclusive-thread device that" 

269 " is already in use by a different thread." 

270 ), 

271 217: ("This error indicates that P2P access is not supported across the given devices."), 

272 218: ( 

273 "A PTX compilation failed. The runtime may fall back to compiling PTX if" 

274 " an application does not contain a suitable binary for the current device." 

275 ), 

276 219: "This indicates an error with the OpenGL or DirectX context.", 

277 220: ("This indicates that an uncorrectable NVLink error was detected during the execution."), 

278 221: ( 

279 "This indicates that the PTX JIT compiler library was not found. The JIT Compiler" 

280 " library is used for PTX compilation. The runtime may fall back to compiling PTX" 

281 " if an application does not contain a suitable binary for the current device." 

282 ), 

283 222: ( 

284 "This indicates that the provided PTX was compiled with an unsupported toolchain." 

285 " The most common reason for this, is the PTX was generated by a compiler newer" 

286 " than what is supported by the CUDA driver and PTX JIT compiler." 

287 ), 

288 223: ( 

289 "This indicates that the JIT compilation was disabled. The JIT compilation compiles" 

290 " PTX. The runtime may fall back to compiling PTX if an application does not contain" 

291 " a suitable binary for the current device." 

292 ), 

293 224: "This indicates that the provided execution affinity is not supported by the device.", 

294 225: ( 

295 "This indicates that the code to be compiled by the PTX JIT contains unsupported call to cudaDeviceSynchronize." 

296 ), 

297 226: ( 

298 "This indicates that an exception occurred on the device that is now" 

299 " contained by the GPU's error containment capability. Common causes are -" 

300 " a. Certain types of invalid accesses of peer GPU memory over nvlink" 

301 " b. Certain classes of hardware errors" 

302 " This leaves the process in an inconsistent state and any further CUDA" 

303 " work will return the same error. To continue using CUDA, the process must" 

304 " be terminated and relaunched." 

305 ), 

306 300: "This indicates that the device kernel source is invalid.", 

307 301: "This indicates that the file specified was not found.", 

308 302: "This indicates that a link to a shared object failed to resolve.", 

309 303: "This indicates that initialization of a shared object failed.", 

310 304: "This error indicates that an OS call failed.", 

311 400: ( 

312 "This indicates that a resource handle passed to the API call was not" 

313 " valid. Resource handles are opaque types like ::cudaStream_t and" 

314 " ::cudaEvent_t." 

315 ), 

316 401: ( 

317 "This indicates that a resource required by the API call is not in a" 

318 " valid state to perform the requested operation." 

319 ), 

320 402: ( 

321 "This indicates an attempt was made to introspect an object in a way that" 

322 " would discard semantically important information. This is either due to" 

323 " the object using funtionality newer than the API version used to" 

324 " introspect it or omission of optional return arguments." 

325 ), 

326 500: ( 

327 "This indicates that a named symbol was not found. Examples of symbols" 

328 " are global/constant variable names, driver function names, texture names," 

329 " and surface names." 

330 ), 

331 600: ( 

332 "This indicates that asynchronous operations issued previously have not" 

333 " completed yet. This result is not actually an error, but must be indicated" 

334 " differently than ::cudaSuccess (which indicates completion). Calls that" 

335 " may return this value include ::cudaEventQuery() and ::cudaStreamQuery()." 

336 ), 

337 700: ( 

338 "The device encountered a load or store instruction on an invalid memory address." 

339 " This leaves the process in an inconsistent state and any further CUDA work" 

340 " will return the same error. To continue using CUDA, the process must be terminated" 

341 " and relaunched." 

342 ), 

343 701: ( 

344 "This indicates that a launch did not occur because it did not have" 

345 " appropriate resources. Although this error is similar to" 

346 " ::cudaErrorInvalidConfiguration, this error usually indicates that the" 

347 " user has attempted to pass too many arguments to the device kernel, or the" 

348 " kernel launch specifies too many threads for the kernel's register count." 

349 ), 

350 702: ( 

351 "This indicates that the device kernel took too long to execute. This can" 

352 " only occur if timeouts are enabled - see the device attribute" 

353 ' ::cudaDeviceAttr::cudaDevAttrKernelExecTimeout "cudaDevAttrKernelExecTimeout"' 

354 " for more information." 

355 " This leaves the process in an inconsistent state and any further CUDA work" 

356 " will return the same error. To continue using CUDA, the process must be terminated" 

357 " and relaunched." 

358 ), 

359 703: ("This error indicates a kernel launch that uses an incompatible texturing mode."), 

360 704: ( 

361 "This error indicates that a call to ::cudaDeviceEnablePeerAccess() is" 

362 " trying to re-enable peer addressing on from a context which has already" 

363 " had peer addressing enabled." 

364 ), 

365 705: ( 

366 "This error indicates that ::cudaDeviceDisablePeerAccess() is trying to" 

367 " disable peer addressing which has not been enabled yet via" 

368 " ::cudaDeviceEnablePeerAccess()." 

369 ), 

370 708: ( 

371 "This indicates that the user has called ::cudaSetValidDevices()," 

372 " ::cudaSetDeviceFlags(), ::cudaD3D9SetDirect3DDevice()," 

373 " ::cudaD3D10SetDirect3DDevice, ::cudaD3D11SetDirect3DDevice(), or" 

374 " ::cudaVDPAUSetVDPAUDevice() after initializing the CUDA runtime by" 

375 " calling non-device management operations (allocating memory and" 

376 " launching kernels are examples of non-device management operations)." 

377 " This error can also be returned if using runtime/driver" 

378 " interoperability and there is an existing ::CUcontext active on the" 

379 " host thread." 

380 ), 

381 709: ( 

382 "This error indicates that the context current to the calling thread" 

383 " has been destroyed using ::cuCtxDestroy, or is a primary context which" 

384 " has not yet been initialized." 

385 ), 

386 710: ( 

387 "An assert triggered in device code during kernel execution. The device" 

388 " cannot be used again. All existing allocations are invalid. To continue" 

389 " using CUDA, the process must be terminated and relaunched." 

390 ), 

391 711: ( 

392 "This error indicates that the hardware resources required to enable" 

393 " peer access have been exhausted for one or more of the devices" 

394 " passed to ::cudaEnablePeerAccess()." 

395 ), 

396 712: ("This error indicates that the memory range passed to ::cudaHostRegister() has already been registered."), 

397 713: ( 

398 "This error indicates that the pointer passed to ::cudaHostUnregister()" 

399 " does not correspond to any currently registered memory region." 

400 ), 

401 714: ( 

402 "Device encountered an error in the call stack during kernel execution," 

403 " possibly due to stack corruption or exceeding the stack size limit." 

404 " This leaves the process in an inconsistent state and any further CUDA work" 

405 " will return the same error. To continue using CUDA, the process must be terminated" 

406 " and relaunched." 

407 ), 

408 715: ( 

409 "The device encountered an illegal instruction during kernel execution" 

410 " This leaves the process in an inconsistent state and any further CUDA work" 

411 " will return the same error. To continue using CUDA, the process must be terminated" 

412 " and relaunched." 

413 ), 

414 716: ( 

415 "The device encountered a load or store instruction" 

416 " on a memory address which is not aligned." 

417 " This leaves the process in an inconsistent state and any further CUDA work" 

418 " will return the same error. To continue using CUDA, the process must be terminated" 

419 " and relaunched." 

420 ), 

421 717: ( 

422 "While executing a kernel, the device encountered an instruction" 

423 " which can only operate on memory locations in certain address spaces" 

424 " (global, shared, or local), but was supplied a memory address not" 

425 " belonging to an allowed address space." 

426 " This leaves the process in an inconsistent state and any further CUDA work" 

427 " will return the same error. To continue using CUDA, the process must be terminated" 

428 " and relaunched." 

429 ), 

430 718: ( 

431 "The device encountered an invalid program counter." 

432 " This leaves the process in an inconsistent state and any further CUDA work" 

433 " will return the same error. To continue using CUDA, the process must be terminated" 

434 " and relaunched." 

435 ), 

436 719: ( 

437 "An exception occurred on the device while executing a kernel. Common" 

438 " causes include dereferencing an invalid device pointer and accessing" 

439 " out of bounds shared memory. Less common cases can be system specific - more" 

440 " information about these cases can be found in the system specific user guide." 

441 " This leaves the process in an inconsistent state and any further CUDA work" 

442 " will return the same error. To continue using CUDA, the process must be terminated" 

443 " and relaunched." 

444 ), 

445 720: ( 

446 "This error indicates that the number of blocks launched per grid for a kernel that was" 

447 " launched via either ::cudaLaunchCooperativeKernel" 

448 " exceeds the maximum number of blocks as allowed by ::cudaOccupancyMaxActiveBlocksPerMultiprocessor" 

449 " or ::cudaOccupancyMaxActiveBlocksPerMultiprocessorWithFlags times the number of multiprocessors" 

450 " as specified by the device attribute ::cudaDevAttrMultiProcessorCount." 

451 ), 

452 721: ( 

453 "An exception occurred on the device while exiting a kernel using tensor memory: the" 

454 " tensor memory was not completely deallocated. This leaves the process in an inconsistent" 

455 " state and any further CUDA work will return the same error. To continue using CUDA, the" 

456 " process must be terminated and relaunched." 

457 ), 

458 800: "This error indicates the attempted operation is not permitted.", 

459 801: ("This error indicates the attempted operation is not supported on the current system or device."), 

460 802: ( 

461 "This error indicates that the system is not yet ready to start any CUDA" 

462 " work. To continue using CUDA, verify the system configuration is in a" 

463 " valid state and all required driver daemons are actively running." 

464 " More information about this error can be found in the system specific" 

465 " user guide." 

466 ), 

467 803: ( 

468 "This error indicates that there is a mismatch between the versions of" 

469 " the display driver and the CUDA driver. Refer to the compatibility documentation" 

470 " for supported versions." 

471 ), 

472 804: ( 

473 "This error indicates that the system was upgraded to run with forward compatibility" 

474 " but the visible hardware detected by CUDA does not support this configuration." 

475 " Refer to the compatibility documentation for the supported hardware matrix or ensure" 

476 " that only supported hardware is visible during initialization via the CUDA_VISIBLE_DEVICES" 

477 " environment variable." 

478 ), 

479 805: "This error indicates that the MPS client failed to connect to the MPS control daemon or the MPS server.", 

    806: "This error indicates that the remote procedure call between the MPS server and the MPS client failed.",
    807: (
        "This error indicates that the MPS server is not ready to accept new MPS client requests."
        " This error can be returned when the MPS server is in the process of recovering from a fatal failure."
    ),
    808: "This error indicates that the hardware resources required to create an MPS client have been exhausted.",
    809: "This error indicates that the hardware resources required for device connections have been exhausted.",
    810: "This error indicates that the MPS client has been terminated by the server. To continue using CUDA, the process must be terminated and relaunched.",
    811: "This error indicates that the program is using CUDA Dynamic Parallelism, but the current configuration, like MPS, does not support it.",
    812: "This error indicates that the program contains an unsupported interaction between different versions of CUDA Dynamic Parallelism.",
    900: "The operation is not permitted when the stream is capturing.",
    901: ("The current capture sequence on the stream has been invalidated due to a previous error."),
    902: ("The operation would have resulted in a merge of two independent capture sequences."),
    903: "The capture was not initiated in this stream.",
    904: ("The capture sequence contains a fork that was not joined to the primary stream."),
    905: (
        "A dependency would have been created which crosses the capture sequence"
        " boundary. Only implicit in-stream ordering dependencies are allowed to"
        " cross the boundary."
    ),
    906: (
        "The operation would have resulted in a disallowed implicit dependency on"
        " a current capture sequence from cudaStreamLegacy."
    ),
    907: ("The operation is not permitted on an event which was last recorded in a capturing stream."),
    908: (
        "A stream capture sequence not initiated with the ::cudaStreamCaptureModeRelaxed"
        " argument to ::cudaStreamBeginCapture was passed to ::cudaStreamEndCapture in a"
        " different thread."
    ),
    909: "This indicates that the wait operation has timed out.",
    910: (
        "This error indicates that the graph update was not performed because it included"
        " changes which violated constraints specific to instantiated graph update."
    ),
    911: (
        "This indicates that an async error has occurred in a device outside of CUDA."
        " If CUDA was waiting for an external device's signal before consuming shared data,"
        " the external device signaled an error indicating that the data is not valid for"
        " consumption. This leaves the process in an inconsistent state and any further CUDA"
        " work will return the same error. To continue using CUDA, the process must be"
        " terminated and relaunched."
    ),
    912: ("This indicates that a kernel launch error has occurred due to cluster misconfiguration."),
524 913: ("Indiciates a function handle is not loaded when calling an API that requires a loaded function."), 

    914: ("This error indicates one or more resources passed in are not valid resource types for the operation."),
    915: ("This error indicates one or more resources are insufficient or non-applicable for the operation."),
    917: (
        "This error indicates that the requested operation is not permitted because the"
        " stream is in a detached state. This can occur if the green context associated"
        " with the stream has been destroyed, limiting the stream's operational capabilities."
    ),
    999: "This indicates that an unknown internal error has occurred.",
    10000: (
        "Any unhandled CUDA driver error is added to this value and returned via"
        " the runtime. Production releases of CUDA should not return such errors."
        " This error return is deprecated as of CUDA 4.1."
    ),
}
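

# Illustrative sketch only (not part of the frozen table above): a minimal example of
# how a caller might consume _FALLBACK_EXPLANATIONS. The helper name below is
# hypothetical, not an API of this module; unknown cudaError_t values fall back to a
# generic message instead of raising KeyError.
def _explain_runtime_error(code: int) -> str:
    """Return the canned explanation for a CUDA runtime error code, if one is known."""
    return _FALLBACK_EXPLANATIONS.get(code, f"Unrecognized CUDA runtime error code {code}.")


# Hypothetical usage: _explain_runtime_error(2)
# -> "The API call failed because it was unable to allocate enough memory or ..."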