cuda.core.experimental.VirtualMemoryResourceOptions
class cuda.core.experimental.VirtualMemoryResourceOptions(
    allocation_type: ~typing.Literal['pinned', 'managed'] = 'pinned',
    location_type: ~typing.Literal['device', 'host', 'host_numa', 'host_numa_current'] = 'device',
    handle_type: ~typing.Literal['posix_fd', 'generic', 'none', 'win32', 'win32_kmt', 'fabric'] = 'posix_fd',
    granularity: ~typing.Literal['minimum', 'recommended'] = 'recommended',
    gpu_direct_rdma: bool = True,
    addr_hint: int | None = 0,
    addr_align: int | None = None,
    peers: ~typing.Iterable[int] = <factory>,
    self_access: ~typing.Literal['rw', 'r', 'none'] = 'rw',
    peer_access: ~typing.Literal['rw', 'r', 'none'] = 'rw',
)
A configuration object for VirtualMemoryResource.

Stores the configuration that tells the resource how to use the CUDA virtual memory management (VMM) APIs.
Args:
    handle_type: Export handle type for the physical allocation. Use 'posix_fd' (CU_MEM_HANDLE_TYPE_POSIX_FILE_DESCRIPTOR) on Linux if you plan to import/export the allocation (required for cuMemRetainAllocationHandle). Use 'none' (CU_MEM_HANDLE_TYPE_NONE) if you don't need an exportable handle.
    gpu_direct_rdma: Hint that the allocation should be GDR-capable (if supported).
    granularity: 'recommended' or 'minimum'. Controls the granularity query and size rounding.
    addr_hint: An optional virtual address hint to try to reserve at. 0 lets CUDA choose.
    addr_align: Alignment for the VA reservation. If None, the queried granularity is used.
    peers: Extra device IDs that should be granted access in addition to the owning device.
    self_access: Access flags for the owning device ('rw', 'r', or 'none').
    peer_access: Access flags for peers ('rw' or 'r').
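
Example: the sketch below builds an options object with the field names from the signature above (those are taken directly from this page) and hands it to a VirtualMemoryResource. The VirtualMemoryResource constructor shape and the allocate()/close() calls are assumptions here; consult the VirtualMemoryResource documentation for the exact API.

    from cuda.core.experimental import (
        Device,
        VirtualMemoryResource,
        VirtualMemoryResourceOptions,
    )

    dev = Device(0)
    dev.set_current()

    # Request a device-resident physical allocation with an exportable POSIX FD
    # handle, GDR-capable if supported, mapped read/write on the owning device.
    # All keyword names below come from the signature documented above.
    options = VirtualMemoryResourceOptions(
        allocation_type="pinned",    # physical allocation kind
        location_type="device",      # back the mapping with GPU memory
        handle_type="posix_fd",      # exportable handle on Linux; "none" if not needed
        granularity="recommended",   # round sizes to the recommended granularity
        gpu_direct_rdma=True,        # hint that the allocation should be GDR-capable
        addr_hint=0,                 # 0 -> let CUDA choose the virtual address
        addr_align=None,             # None -> use the queried granularity
        peers=[],                    # no extra devices need access
        self_access="rw",
        peer_access="rw",
    )

    # Assumed constructor: owning device plus this options object.
    mr = VirtualMemoryResource(dev, options)
    buf = mr.allocate(32 * 1024 * 1024)  # 32 MiB; size is rounded per granularity
    buf.close()

To grant another GPU access to the same mapping, list its device ID in peers and set peer_access accordingly; self_access still governs the owning device's own mapping.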