cuda.core.ManagedMemoryResource#

class cuda.core.ManagedMemoryResource(options=None)#

A memory resource that allocates managed memory from a stream-ordered memory pool.

Managed memory is accessible from both the host and device, with automatic migration between them as needed.

Parameters:

options (ManagedMemoryResourceOptions) –

Memory resource creation options.

If set to None, the memory resource uses the driver’s current stream-ordered memory pool. If no memory pool is set as current, the driver’s default memory pool is used.

If not set to None, a new memory pool is created, which is owned by the memory resource.

When using an existing (current or default) memory pool, the returned managed memory resource does not own the pool (is_handle_owned is False), and closing the resource has no effect.

Notes

IPC (Inter-Process Communication) is not currently supported for managed memory pools.

Methods

__init__(*args, **kwargs)#
allocate(
self,
size_t size,
stream: Stream | GraphBuilder | None = None,
) → Buffer#

Allocate a buffer of the requested size.

Parameters:
  • size (int) – The size of the buffer to allocate, in bytes.

  • stream (Stream | GraphBuilder, optional) – The stream on which to perform the allocation asynchronously. If None, an internal stream is used.

Returns:

The allocated buffer object, which is accessible from both the host and the device.

Return type:

Buffer

close(self)#

Close the managed memory resource and destroy the associated memory pool if it is owned by this resource.

deallocate(
self,
ptr: DevicePointerT,
size_t size,
stream: Stream | GraphBuilder | None = None,
)#

Deallocate a buffer previously allocated by this resource.

Parameters:
  • ptr (DevicePointerT) – The pointer or handle to the buffer to deallocate.

  • size (int) – The size of the buffer to deallocate, in bytes.

  • stream (Stream | GraphBuilder, optional) – The stream on which to perform the deallocation asynchronously. If the buffer is deallocated without an explicit stream, the allocation stream is used.

Attributes