cuda::experimental::device_memory_pool#

struct device_memory_pool : public cuda::experimental::device_memory_resource#

Stream-ordered memory resource#

device_memory_pool allocates device memory using cudaMallocFromPoolAsync / cudaFreeAsync for allocation/deallocation. When constructed, it creates an underlying cudaMemPool_t with the location type set to cudaMemLocationTypeDevice and owns it.
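For orientation, a minimal sketch of the raw CUDA runtime sequence this class wraps is shown below. It uses only the runtime API (cudaMemPoolCreate, cudaMallocFromPoolAsync, cudaFreeAsync) with device 0 as an illustrative target; it is not the cuda::experimental interface itself, and error checking is omitted.

#include <cuda_runtime.h>

int main() {
  cudaStream_t stream;
  cudaStreamCreate(&stream);

  // Create a pool whose memory resides on device 0, mirroring the
  // cudaMemLocationTypeDevice location that device_memory_pool configures.
  cudaMemPoolProps pool_props{};
  pool_props.allocType     = cudaMemAllocationTypePinned;
  pool_props.location.type = cudaMemLocationTypeDevice;
  pool_props.location.id   = 0;

  cudaMemPool_t pool;
  cudaMemPoolCreate(&pool, &pool_props);

  // Stream-ordered allocation from the pool and the matching stream-ordered free.
  void* ptr = nullptr;
  cudaMallocFromPoolAsync(&ptr, 1 << 20, pool, stream);
  cudaFreeAsync(ptr, stream);

  cudaStreamSynchronize(stream);
  cudaMemPoolDestroy(pool);
  cudaStreamDestroy(stream);
  return 0;
}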

Public Types

using reference_type = device_memory_resource#
using default_queries = ::cuda::mr::properties_list<::cuda::mr::device_accessible>#

Public Functions

inline device_memory_pool(
::cuda::device_ref __device_id,
memory_pool_properties __properties = {},
)#

Constructs a device_memory_pool with the optionally specified initial pool size and release threshold.

If the pool size grows beyond the release threshold, unused memory held by the pool will be released at the next synchronization event.

Throws:

cuda_error – if the CUDA version does not support cudaMallocAsync.

Parameters:
  • __device_id – The device id of the device the memory pool is constructed on.

  • __properties – Optional, additional properties of the pool to be created.

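A minimal construction sketch follows. The memory_pool_properties field names initial_pool_size and release_threshold are assumptions inferred from the constructor description above, and the header path is illustrative; verify both against the cudax headers in your CCCL version.

#include <cuda/experimental/memory_resource.cuh>  // illustrative header path; verify against your CCCL version

namespace cudax = cuda::experimental;

void make_pool() {
  cudax::memory_pool_properties props{};
  props.initial_pool_size = 4 << 20;   // pre-reserve 4 MiB (assumed field name)
  props.release_threshold = 64 << 20;  // keep up to 64 MiB across synchronizations (assumed field name)

  // The pool owns the underlying cudaMemPool_t; it is destroyed in ~device_memory_pool().
  cudax::device_memory_pool pool{cuda::device_ref{0}, props};
}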
inline ~device_memory_pool() noexcept#
device_memory_pool(const device_memory_pool&) = delete#
device_memory_pool &operator=(const device_memory_pool&) = delete#

Public Static Functions

static inline device_memory_pool from_native_handle(
::cudaMemPool_t __pool,
) noexcept#
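Judging by its name and return type, from_native_handle wraps an already created cudaMemPool_t in a device_memory_pool. The sketch below, including the assumption that the returned object takes ownership of the handle, is illustrative only.

#include <cuda/experimental/memory_resource.cuh>  // illustrative header path; verify against your CCCL version
#include <cuda_runtime.h>

namespace cudax = cuda::experimental;

void adopt_native_pool() {
  // Create a pool directly through the CUDA runtime.
  cudaMemPoolProps pool_props{};
  pool_props.allocType     = cudaMemAllocationTypePinned;
  pool_props.location.type = cudaMemLocationTypeDevice;
  pool_props.location.id   = 0;

  cudaMemPool_t native{};
  cudaMemPoolCreate(&native, &pool_props);

  // Wrap the native handle; assumed to take ownership, so do not call
  // cudaMemPoolDestroy on `native` afterwards.
  auto pool = cudax::device_memory_pool::from_native_handle(native);
}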