cuda::experimental::device_memory_pool

class device_memory_pool : public cuda::experimental::__memory_pool_base

device_memory_pool is an owning wrapper around a cudaMemPool_t.

It handles creation and destruction of the underlying cudaMemPool_t using the provided memory_pool_properties.

Public Functions

inline explicit device_memory_pool(const ::cuda::device_ref __device_id, memory_pool_properties __properties = {})

Constructs a device_memory_pool with the optionally specified initial pool size and release threshold.

If the pool size grows beyond the release threshold, unused memory held by the pool will be released at the next synchronization event.

Throws:

cuda_error – if the CUDA version does not support cudaMallocAsync.

Parameters:
  • __device_id – The id of the device on which the memory pool is constructed.

  • __properties – Optional additional properties of the pool to be created.
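
A minimal usage sketch follows; the include path, the construction of cuda::device_ref from a plain device id, and the initial_pool_size / release_threshold member names of memory_pool_properties are assumptions based on the description above, not verbatim from this reference.

#include <cuda/experimental/memory_resource.cuh> // assumed umbrella header for the cudax memory resources

namespace cudax = cuda::experimental;

int main()
{
  // Member names assumed from the "initial pool size and release threshold" wording above.
  cudax::memory_pool_properties props{};
  props.initial_pool_size = 1024 * 1024;      // carve out 1 MiB up front
  props.release_threshold = 64 * 1024 * 1024; // keep up to 64 MiB cached between synchronization events

  // Creates the pool on device 0; the underlying cudaMemPool_t is destroyed
  // when `pool` goes out of scope.
  cudax::device_memory_pool pool{::cuda::device_ref{0}, props};
  return 0;
}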

device_memory_pool(::cudaMemPool_t) = delete

Disallows construction from a plain cudaMemPool_t to ensure clear ownership semantics. To explicitly adopt an existing handle, use from_native_handle.

device_memory_pool(device_memory_pool const&) = delete
device_memory_pool(device_memory_pool&&) = delete
device_memory_pool &operator=(device_memory_pool const&) = delete
device_memory_pool &operator=(device_memory_pool&&) = delete

Public Static Functions

static inline device_memory_pool from_native_handle(::cudaMemPool_t __handle) noexcept

Constructs a device_memory_pool object from a native cudaMemPool_t handle.

Note

The constructed device_memory_pool object takes ownership of the native handle.

Parameters:

__handle – The native handle.

Returns:

The constructed device_memory_pool object.
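
For the owning overload above, a minimal adoption sketch follows; the include path is an assumption, while the pool creation itself uses the plain CUDA runtime API (cudaMemPoolCreate).

#include <cuda/experimental/memory_resource.cuh> // assumed umbrella header for the cudax memory resources
#include <cuda_runtime_api.h>

namespace cudax = cuda::experimental;

int main()
{
  // Create a pool for device 0 with the plain CUDA runtime API.
  ::cudaMemPoolProps pool_props{};
  pool_props.allocType     = ::cudaMemAllocationTypePinned;
  pool_props.handleTypes   = ::cudaMemHandleTypeNone;
  pool_props.location.type = ::cudaMemLocationTypeDevice;
  pool_props.location.id   = 0;

  ::cudaMemPool_t native{};
  if (::cudaMemPoolCreate(&native, &pool_props) != ::cudaSuccess)
  {
    return 1;
  }

  // Ownership transfers here: do not call cudaMemPoolDestroy on `native` afterwards;
  // the wrapper destroys the pool when it goes out of scope.
  auto pool = cudax::device_memory_pool::from_native_handle(native);
  return 0;
}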

static device_memory_pool from_native_handle(int) = delete
static device_memory_pool from_native_handle(cuda::std::nullptr_t) = delete

These overloads are deleted so that from_native_handle cannot be called with a literal 0 or nullptr, which would otherwise implicitly convert to cudaMemPool_t.