Coverage for cuda/core/_utils/runtime_cuda_error_explanations.py: 100.00%

1 statements

coverage.py v7.13.5, created at 2026-03-25 01:07 +0000

# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: LicenseRef-NVIDIA-SOFTWARE-LICENSE

# To regenerate the dictionary below run:
# ../../../../../toolshed/reformat_cuda_enums_as_py.py /usr/local/cuda/include/driver_types.h
# Replace the dictionary below with the output.
# Also update the CUDA Toolkit version number below.

# CUDA Toolkit v13.2.0
RUNTIME_CUDA_ERROR_EXPLANATIONS = {

    0: (
        "The API call returned with no errors. In the case of query calls, this"
        " also means that the operation being queried is complete (see"
        " ::cudaEventQuery() and ::cudaStreamQuery())."
    ),
    1: (
        "This indicates that one or more of the parameters passed to the API call"
        " is not within an acceptable range of values."
    ),
    2: (
        "The API call failed because it was unable to allocate enough memory or"
        " other resources to perform the requested operation."
    ),
    3: ("The API call failed because the CUDA driver and runtime could not be initialized."),
    4: (
        "This indicates that a CUDA Runtime API call cannot be executed because"
        " it is being called during process shut down, at a point in time after"
        " CUDA driver has been unloaded."
    ),
    5: (
        "This indicates the profiler is not initialized for this run. This can"
        " happen when the application is running with external profiling tools"
        " like visual profiler."
    ),
    6: (
        "This error return is deprecated as of CUDA 5.0. It is no longer an error"
        " to attempt to enable/disable the profiling via ::cudaProfilerStart or"
        " ::cudaProfilerStop without initialization."
    ),
    7: (
        "This error return is deprecated as of CUDA 5.0. It is no longer an error"
        " to call cudaProfilerStart() when profiling is already enabled."
    ),
    8: (
        "This error return is deprecated as of CUDA 5.0. It is no longer an error"
        " to call cudaProfilerStop() when profiling is already disabled."
    ),
    9: (
        "This indicates that a kernel launch is requesting resources that can"
        " never be satisfied by the current device. Requesting more shared memory"
        " per block than the device supports will trigger this error, as will"
        " requesting too many threads or blocks. See ::cudaDeviceProp for more"
        " device limitations."
    ),
    10: (
        "This indicates that the driver is newer than the runtime version"
        " and returned graph node parameter information that the runtime"
        " does not understand and is unable to translate."
    ),
    12: (
        "This indicates that one or more of the pitch-related parameters passed"
        " to the API call is not within the acceptable range for pitch."
    ),
    13: ("This indicates that the symbol name/identifier passed to the API call is not a valid name or identifier."),
    16: (
        "This indicates that at least one host pointer passed to the API call is"
        " not a valid host pointer."
        " This error return is deprecated as of CUDA 10.1."
    ),
    17: (
        "This indicates that at least one device pointer passed to the API call is"
        " not a valid device pointer."
        " This error return is deprecated as of CUDA 10.1."
    ),
    18: ("This indicates that the texture passed to the API call is not a valid texture."),
    19: (
        "This indicates that the texture binding is not valid. This occurs if you"
        " call ::cudaGetTextureAlignmentOffset() with an unbound texture."
    ),
    20: (
        "This indicates that the channel descriptor passed to the API call is not"
        " valid. This occurs if the format is not one of the formats specified by"
        " ::cudaChannelFormatKind, or if one of the dimensions is invalid."
    ),
    21: (
        "This indicates that the direction of the memcpy passed to the API call is"
        " not one of the types specified by ::cudaMemcpyKind."
    ),
    22: (
        "This indicated that the user has taken the address of a constant variable,"
        " which was forbidden up until the CUDA 3.1 release."
        " This error return is deprecated as of CUDA 3.1. Variables in constant"
        " memory may now have their address taken by the runtime via"
        " ::cudaGetSymbolAddress()."
    ),
    23: (
        "This indicated that a texture fetch was not able to be performed."
        " This was previously used for device emulation of texture operations."
        " This error return is deprecated as of CUDA 3.1. Device emulation mode was"
        " removed with the CUDA 3.1 release."
    ),
    24: (
        "This indicated that a texture was not bound for access."
        " This was previously used for device emulation of texture operations."
        " This error return is deprecated as of CUDA 3.1. Device emulation mode was"
        " removed with the CUDA 3.1 release."
    ),
    25: (
        "This indicated that a synchronization operation had failed."
        " This was previously used for some device emulation functions."
        " This error return is deprecated as of CUDA 3.1. Device emulation mode was"
        " removed with the CUDA 3.1 release."
    ),
    26: (
        "This indicates that a non-float texture was being accessed with linear"
        " filtering. This is not supported by CUDA."
    ),
    27: (
        "This indicates that an attempt was made to read an unsupported data type as a"
        " normalized float. This is not supported by CUDA."
    ),
    28: (
        "Mixing of device and device emulation code was not allowed."
        " This error return is deprecated as of CUDA 3.1. Device emulation mode was"
        " removed with the CUDA 3.1 release."
    ),
    31: (
        "This indicates that the API call is not yet implemented. Production"
        " releases of CUDA will never return this error."
        " This error return is deprecated as of CUDA 4.1."
    ),
    32: (
        "This indicated that an emulated device pointer exceeded the 32-bit address"
        " range."
        " This error return is deprecated as of CUDA 3.1. Device emulation mode was"
        " removed with the CUDA 3.1 release."
    ),
    34: (
        "This indicates that the CUDA driver that the application has loaded is a"
        " stub library. Applications that run with the stub rather than a real"
        " driver loaded will result in CUDA API returning this error."
    ),
    35: (
        "This indicates that the installed NVIDIA CUDA driver is older than the"
        " CUDA runtime library. This is not a supported configuration. Users should"
        " install an updated NVIDIA display driver to allow the application to run."
    ),
    36: (
        "This indicates that the API call requires a newer CUDA driver than the one"
        " currently installed. Users should install an updated NVIDIA CUDA driver"
        " to allow the API call to succeed."
    ),
    37: ("This indicates that the surface passed to the API call is not a valid surface."),
    43: (
        "This indicates that multiple global or constant variables (across separate"
        " CUDA source files in the application) share the same string name."
    ),
    44: (
        "This indicates that multiple textures (across separate CUDA source"
        " files in the application) share the same string name."
    ),
    45: (
        "This indicates that multiple surfaces (across separate CUDA source"
        " files in the application) share the same string name."
    ),
    46: (
        "This indicates that all CUDA devices are busy or unavailable at the current"
        " time. Devices are often busy/unavailable due to use of"
        " ::cudaComputeModeProhibited, ::cudaComputeModeExclusiveProcess, or when long"
        " running CUDA kernels have filled up the GPU and are blocking new work"
        " from starting. They can also be unavailable due to memory constraints"
        " on a device that already has active CUDA work being performed."
    ),

    49: (
        "This indicates that the current context is not compatible with"
        " the CUDA Runtime. This can only occur if you are using CUDA"
        " Runtime/Driver interoperability and have created an existing Driver"
        " context using the driver API. The Driver context may be incompatible"
        " either because the Driver context was created using an older version"
        " of the API, because the Runtime API call expects a primary driver"
        " context and the Driver context is not primary, or because the Driver"
        ' context has been destroyed. Please see CUDART_DRIVER "Interactions'
        ' with the CUDA Driver API" for more information.'
    ),

    52: (
        "The device function being invoked (usually via ::cudaLaunchKernel()) was not"
        " previously configured via the ::cudaConfigureCall() function."
    ),
    53: (
        "This indicated that a previous kernel launch failed. This was previously"
        " used for device emulation of kernel launches."
        " This error return is deprecated as of CUDA 3.1. Device emulation mode was"
        " removed with the CUDA 3.1 release."
    ),
    65: (
        "This error indicates that a device runtime grid launch did not occur"
        " because the depth of the child grid would exceed the maximum supported"
        " number of nested grid launches."
    ),
    66: (
        "This error indicates that a grid launch did not occur because the kernel"
        " uses file-scoped textures which are unsupported by the device runtime."
        " Kernels launched via the device runtime only support textures created with"
        " the Texture Object APIs."
    ),
    67: (
        "This error indicates that a grid launch did not occur because the kernel"
        " uses file-scoped surfaces which are unsupported by the device runtime."
        " Kernels launched via the device runtime only support surfaces created with"
        " the Surface Object APIs."
    ),

    68: (
        "This error indicates that a call to ::cudaDeviceSynchronize made from"
        " the device runtime failed because the call was made at grid depth greater"
        " than either the default (2 levels of grids) or user specified device"
        " limit ::cudaLimitDevRuntimeSyncDepth. To be able to synchronize on"
        " launched grids at a greater depth successfully, the maximum nested"
        " depth at which ::cudaDeviceSynchronize will be called must be specified"
        " with the ::cudaLimitDevRuntimeSyncDepth limit to the ::cudaDeviceSetLimit"
        " api before the host-side launch of a kernel using the device runtime."
        " Keep in mind that additional levels of sync depth require the runtime"
        " to reserve large amounts of device memory that cannot be used for"
        " user allocations. Note that ::cudaDeviceSynchronize made from device"
        " runtime is only supported on devices of compute capability < 9.0."
    ),

    69: (
        "This error indicates that a device runtime grid launch failed because"
        " the launch would exceed the limit ::cudaLimitDevRuntimePendingLaunchCount."
        " For this launch to proceed successfully, ::cudaDeviceSetLimit must be"
        " called to set the ::cudaLimitDevRuntimePendingLaunchCount to be higher"
        " than the upper bound of outstanding launches that can be issued to the"
        " device runtime. Keep in mind that raising the limit of pending device"
        " runtime launches will require the runtime to reserve device memory that"
        " cannot be used for user allocations."
    ),
    98: ("The requested device function does not exist or is not compiled for the proper device architecture."),
    100: ("This indicates that no CUDA-capable devices were detected by the installed CUDA driver."),
    101: (
        "This indicates that the device ordinal supplied by the user does not"
        " correspond to a valid CUDA device or that the action requested is"
        " invalid for the specified device."
    ),
    102: "This indicates that the device doesn't have a valid Grid License.",
    103: (
        "By default, the CUDA runtime may perform a minimal set of self-tests,"
        " as well as CUDA driver tests, to establish the validity of both."
        " Introduced in CUDA 11.2, this error return indicates that at least one"
        " of these tests has failed and the validity of either the runtime"
        " or the driver could not be established."
    ),
    127: "This indicates an internal startup failure in the CUDA runtime.",
    200: "This indicates that the device kernel image is invalid.",

    201: (
        "This most frequently indicates that there is no context bound to the"
        " current thread. This can also be returned if the context passed to an"
        " API call is not a valid handle (such as a context that has had"
        " ::cuCtxDestroy() invoked on it). This can also be returned if a user"
        " mixes different API versions (i.e. 3010 context with 3020 API calls)."
        " See ::cuCtxGetApiVersion() for more details."
    ),
    205: "This indicates that the buffer object could not be mapped.",
    206: "This indicates that the buffer object could not be unmapped.",
    207: ("This indicates that the specified array is currently mapped and thus cannot be destroyed."),
    208: "This indicates that the resource is already mapped.",
    209: (
        "This indicates that there is no kernel image available that is suitable"
        " for the device. This can occur when a user specifies code generation"
        " options for a particular CUDA source file that do not include the"
        " corresponding device configuration."
    ),
    210: "This indicates that a resource has already been acquired.",
    211: "This indicates that a resource is not mapped.",
    212: ("This indicates that a mapped resource is not available for access as an array."),
    213: ("This indicates that a mapped resource is not available for access as a pointer."),
    214: ("This indicates that an uncorrectable ECC error was detected during execution."),
    215: ("This indicates that the ::cudaLimit passed to the API call is not supported by the active device."),
    216: (
        "This indicates that a call tried to access an exclusive-thread device that"
        " is already in use by a different thread."
    ),
    217: ("This error indicates that P2P access is not supported across the given devices."),
    218: (
        "A PTX compilation failed. The runtime may fall back to compiling PTX if"
        " an application does not contain a suitable binary for the current device."
    ),
    219: "This indicates an error with the OpenGL or DirectX context.",
    220: ("This indicates that an uncorrectable NVLink error was detected during the execution."),
    221: (
        "This indicates that the PTX JIT compiler library was not found. The JIT Compiler"
        " library is used for PTX compilation. The runtime may fall back to compiling PTX"
        " if an application does not contain a suitable binary for the current device."
    ),
    222: (
        "This indicates that the provided PTX was compiled with an unsupported toolchain."
        " The most common reason for this is that the PTX was generated by a compiler newer"
        " than what is supported by the CUDA driver and PTX JIT compiler."
    ),
    223: (
        "This indicates that the JIT compilation was disabled. The JIT compilation compiles"
        " PTX. The runtime may fall back to compiling PTX if an application does not contain"
        " a suitable binary for the current device."
    ),

    224: "This indicates that the provided execution affinity is not supported by the device.",
    225: (
        "This indicates that the code to be compiled by the PTX JIT contains an unsupported call to cudaDeviceSynchronize."
    ),
    226: (
        "This indicates that an exception occurred on the device that is now"
        " contained by the GPU's error containment capability. Common causes are -"
        " a. Certain types of invalid accesses of peer GPU memory over nvlink"
        " b. Certain classes of hardware errors"
        " This leaves the process in an inconsistent state and any further CUDA"
        " work will return the same error. To continue using CUDA, the process must"
        " be terminated and relaunched."
    ),
    300: "This indicates that the device kernel source is invalid.",
    301: "This indicates that the file specified was not found.",
    302: "This indicates that a link to a shared object failed to resolve.",
    303: "This indicates that initialization of a shared object failed.",
    304: "This error indicates that an OS call failed.",

    400: (
        "This indicates that a resource handle passed to the API call was not"
        " valid. Resource handles are opaque types like ::cudaStream_t and"
        " ::cudaEvent_t."
    ),
    401: (
        "This indicates that a resource required by the API call is not in a"
        " valid state to perform the requested operation."
    ),
    402: (
        "This indicates an attempt was made to introspect an object in a way that"
        " would discard semantically important information. This is either due to"
        " the object using functionality newer than the API version used to"
        " introspect it or omission of optional return arguments."
    ),
    500: (
        "This indicates that a named symbol was not found. Examples of symbols"
        " are global/constant variable names, driver function names, texture names,"
        " and surface names."
    ),
    600: (
        "This indicates that asynchronous operations issued previously have not"
        " completed yet. This result is not actually an error, but must be indicated"
        " differently than ::cudaSuccess (which indicates completion). Calls that"
        " may return this value include ::cudaEventQuery() and ::cudaStreamQuery()."
    ),

    700: (
        "The device encountered a load or store instruction on an invalid memory address."
        " This leaves the process in an inconsistent state and any further CUDA work"
        " will return the same error. To continue using CUDA, the process must be terminated"
        " and relaunched."
    ),
    701: (
        "This indicates that a launch did not occur because it did not have"
        " appropriate resources. Although this error is similar to"
        " ::cudaErrorInvalidConfiguration, this error usually indicates that the"
        " user has attempted to pass too many arguments to the device kernel, or the"
        " kernel launch specifies too many threads for the kernel's register count."
    ),
    702: (
        "This indicates that the device kernel took too long to execute. This can"
        " only occur if timeouts are enabled - see the device attribute"
        ' ::cudaDeviceAttr::cudaDevAttrKernelExecTimeout "cudaDevAttrKernelExecTimeout"'
        " for more information."
        " This leaves the process in an inconsistent state and any further CUDA work"
        " will return the same error. To continue using CUDA, the process must be terminated"
        " and relaunched."
    ),
    703: ("This error indicates a kernel launch that uses an incompatible texturing mode."),

    704: (
        "This error indicates that a call to ::cudaDeviceEnablePeerAccess() is"
        " trying to re-enable peer addressing from a context which has already"
        " had peer addressing enabled."
    ),
    705: (
        "This error indicates that ::cudaDeviceDisablePeerAccess() is trying to"
        " disable peer addressing which has not been enabled yet via"
        " ::cudaDeviceEnablePeerAccess()."
    ),
    708: (
        "This indicates that the user has called ::cudaSetValidDevices(),"
        " ::cudaSetDeviceFlags(), ::cudaD3D9SetDirect3DDevice(),"
        " ::cudaD3D10SetDirect3DDevice(), ::cudaD3D11SetDirect3DDevice(), or"
        " ::cudaVDPAUSetVDPAUDevice() after initializing the CUDA runtime by"
        " calling non-device management operations (allocating memory and"
        " launching kernels are examples of non-device management operations)."
        " This error can also be returned if using runtime/driver"
        " interoperability and there is an existing ::CUcontext active on the"
        " host thread."
    ),

    709: (
        "This error indicates that the context current to the calling thread"
        " has been destroyed using ::cuCtxDestroy, or is a primary context which"
        " has not yet been initialized."
    ),
    710: (
        "An assert triggered in device code during kernel execution. The device"
        " cannot be used again. All existing allocations are invalid. To continue"
        " using CUDA, the process must be terminated and relaunched."
    ),
    711: (
        "This error indicates that the hardware resources required to enable"
        " peer access have been exhausted for one or more of the devices"
        " passed to ::cudaEnablePeerAccess()."
    ),
    712: ("This error indicates that the memory range passed to ::cudaHostRegister() has already been registered."),
    713: (
        "This error indicates that the pointer passed to ::cudaHostUnregister()"
        " does not correspond to any currently registered memory region."
    ),
    714: (
        "Device encountered an error in the call stack during kernel execution,"
        " possibly due to stack corruption or exceeding the stack size limit."
        " This leaves the process in an inconsistent state and any further CUDA work"
        " will return the same error. To continue using CUDA, the process must be terminated"
        " and relaunched."
    ),

    715: (
        "The device encountered an illegal instruction during kernel execution."
        " This leaves the process in an inconsistent state and any further CUDA work"
        " will return the same error. To continue using CUDA, the process must be terminated"
        " and relaunched."
    ),

    716: (
        "The device encountered a load or store instruction"
        " on a memory address which is not aligned."
        " This leaves the process in an inconsistent state and any further CUDA work"
        " will return the same error. To continue using CUDA, the process must be terminated"
        " and relaunched."
    ),
    717: (
        "While executing a kernel, the device encountered an instruction"
        " which can only operate on memory locations in certain address spaces"
        " (global, shared, or local), but was supplied a memory address not"
        " belonging to an allowed address space."
        " This leaves the process in an inconsistent state and any further CUDA work"
        " will return the same error. To continue using CUDA, the process must be terminated"
        " and relaunched."
    ),
    718: (
        "The device encountered an invalid program counter."
        " This leaves the process in an inconsistent state and any further CUDA work"
        " will return the same error. To continue using CUDA, the process must be terminated"
        " and relaunched."
    ),
    719: (
        "An exception occurred on the device while executing a kernel. Common"
        " causes include dereferencing an invalid device pointer and accessing"
        " out of bounds shared memory. Less common cases can be system specific - more"
        " information about these cases can be found in the system specific user guide."
        " This leaves the process in an inconsistent state and any further CUDA work"
        " will return the same error. To continue using CUDA, the process must be terminated"
        " and relaunched."
    ),

    720: (
        "This error indicates that the number of blocks launched per grid for a kernel that was"
        " launched via ::cudaLaunchCooperativeKernel"
        " exceeds the maximum number of blocks as allowed by ::cudaOccupancyMaxActiveBlocksPerMultiprocessor"
        " or ::cudaOccupancyMaxActiveBlocksPerMultiprocessorWithFlags times the number of multiprocessors"
        " as specified by the device attribute ::cudaDevAttrMultiProcessorCount."
    ),
    721: (
        "An exception occurred on the device while exiting a kernel using tensor memory: the"
        " tensor memory was not completely deallocated. This leaves the process in an inconsistent"
        " state and any further CUDA work will return the same error. To continue using CUDA, the"
        " process must be terminated and relaunched."
    ),

    800: "This error indicates the attempted operation is not permitted.",
    801: ("This error indicates the attempted operation is not supported on the current system or device."),
    802: (
        "This error indicates that the system is not yet ready to start any CUDA"
        " work. To continue using CUDA, verify the system configuration is in a"
        " valid state and all required driver daemons are actively running."
        " More information about this error can be found in the system specific"
        " user guide."
    ),
    803: (
        "This error indicates that there is a mismatch between the versions of"
        " the display driver and the CUDA driver. Refer to the compatibility documentation"
        " for supported versions."
    ),
    804: (
        "This error indicates that the system was upgraded to run with forward compatibility"
        " but the visible hardware detected by CUDA does not support this configuration."
        " Refer to the compatibility documentation for the supported hardware matrix or ensure"
        " that only supported hardware is visible during initialization via the CUDA_VISIBLE_DEVICES"
        " environment variable."
    ),
    805: "This error indicates that the MPS client failed to connect to the MPS control daemon or the MPS server.",
    806: "This error indicates that the remote procedure call between the MPS server and the MPS client failed.",
    807: (
        "This error indicates that the MPS server is not ready to accept new MPS client requests."
        " This error can be returned when the MPS server is in the process of recovering from a fatal failure."
    ),

    808: "This error indicates that the hardware resources required to create MPS client have been exhausted.",
    809: "This error indicates that the hardware resources required to create device connections have been exhausted.",
    810: "This error indicates that the MPS client has been terminated by the server. To continue using CUDA, the process must be terminated and relaunched.",
    811: "This error indicates that the program is using CUDA Dynamic Parallelism, but the current configuration, like MPS, does not support it.",
    812: "This error indicates that the program contains an unsupported interaction between different versions of CUDA Dynamic Parallelism.",

    900: "The operation is not permitted when the stream is capturing.",
    901: ("The current capture sequence on the stream has been invalidated due to a previous error."),
    902: ("The operation would have resulted in a merge of two independent capture sequences."),
    903: "The capture was not initiated in this stream.",
    904: ("The capture sequence contains a fork that was not joined to the primary stream."),
    905: (
        "A dependency would have been created which crosses the capture sequence"
        " boundary. Only implicit in-stream ordering dependencies are allowed to"
        " cross the boundary."
    ),
    906: (
        "The operation would have resulted in a disallowed implicit dependency on"
        " a current capture sequence from cudaStreamLegacy."
    ),
    907: ("The operation is not permitted on an event which was last recorded in a capturing stream."),
    908: (
        "A stream capture sequence not initiated with the ::cudaStreamCaptureModeRelaxed"
        " argument to ::cudaStreamBeginCapture was passed to ::cudaStreamEndCapture in a"
        " different thread."
    ),
    909: "This indicates that the wait operation has timed out.",
    910: (
        "This error indicates that the graph update was not performed because it included"
        " changes which violated constraints specific to instantiated graph update."
    ),

    911: (
        "This indicates that an error has occurred in a device outside of the GPU. It can be a"
        " synchronous error w.r.t. the CUDA API or an asynchronous error from the external device."
        " In the case of an asynchronous error, it means that if CUDA was waiting for an external device's"
        " signal before consuming shared data, the external device signaled an error indicating that"
        " the data is not valid for consumption. This leaves the process in an inconsistent"
        " state and any further CUDA work will return the same error. To continue using CUDA,"
        " the process must be terminated and relaunched."
        " In the case of a synchronous error, it means that one or more external devices"
        " have encountered an error and cannot complete the operation."
    ),

    912: ("This indicates that a kernel launch error has occurred due to cluster misconfiguration."),
    913: ("Indicates a function handle is not loaded when calling an API that requires a loaded function."),
    914: ("This error indicates one or more resources passed in are not valid resource types for the operation."),
    915: ("This error indicates one or more resources are insufficient or non-applicable for the operation."),
    917: (
        "This error indicates that the requested operation is not permitted because the"
        " stream is in a detached state. This can occur if the green context associated"
        " with the stream has been destroyed, limiting the stream's operational capabilities."
    ),

    999: "This indicates that an unknown internal error has occurred.",
    10000: (
        "Any unhandled CUDA driver error is added to this value and returned via"
        " the runtime. Production releases of CUDA should not return such errors."
        " This error return is deprecated as of CUDA 4.1."
    ),
}
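The table above is a plain `int` → `str` mapping, so consuming it is a single dict lookup. As a minimal sketch (assumption: the `explain` helper and the abbreviated two-entry table below are illustrative only, not part of this module), a caller might format unknown codes with a fallback rather than raising `KeyError`:

```python
# Illustrative, abbreviated stand-in for RUNTIME_CUDA_ERROR_EXPLANATIONS;
# the real table in this module has one entry per cudaError_t value.
EXPLANATIONS = {
    0: "The API call returned with no errors.",
    2: "The API call failed because it was unable to allocate enough memory.",
}


def explain(code: int) -> str:
    # dict.get with a default avoids KeyError for codes missing from the table
    # (e.g. values added by a newer CUDA Toolkit than the one used to generate it).
    return EXPLANATIONS.get(code, f"Unknown CUDA runtime error code: {code}")


print(explain(2))      # known code: returns the explanation string
print(explain(12345))  # unknown code: falls back to a generic message
```

Using `.get()` with a fallback keeps the helper forward-compatible when the driver returns an error value introduced after the dictionary was last regenerated.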