Making Tensors#
make_tensor is a utility function for creating tensors. Where possible, prefer make_tensor over declaring a tensor_t directly, since it allows the underlying tensor type to change in the future without breaking user code. See Creating Tensors for a detailed walkthrough on creating tensors.
make_tensor provides numerous overloads for different arguments and use cases:
Return by Value#
-
template<typename T, int RANK>
auto matx::make_tensor(const index_t (&shape)[RANK], matxMemorySpace_t space = MATX_MANAGED_MEMORY, cudaStream_t stream = 0)# Create a tensor with a C array for the shape using implicitly-allocated memory
- Parameters:
shape – Shape of tensor
space – memory space to allocate in. Default is managed memory.
stream – cuda stream to allocate in (only applicable to async allocations)
- Returns:
New tensor
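A minimal sketch of this overload, assuming a CUDA-capable build with the MatX headers available (`MATX_DEVICE_MEMORY` is one of the documented memory-space values):

```cpp
#include <matx.h>

int main() {
  // 2D float tensor, 10x20, backed by CUDA managed memory (the default space)
  auto a = matx::make_tensor<float>({10, 20});

  // Same shape, but explicitly allocated in device memory
  auto b = matx::make_tensor<float>({10, 20}, matx::MATX_DEVICE_MEMORY);

  (a = 1.0f).run();   // fill via an executor on the default stream
  return 0;
}
```

The rank is deduced at compile time from the braced shape list, so no explicit RANK argument is needed.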
-
template<typename TensorType, std::enable_if_t<is_tensor_view_v<TensorType>, bool> = true>
void matx::make_tensor(TensorType &tensor, const index_t (&shape)[TensorType::Rank()], matxMemorySpace_t space = MATX_MANAGED_MEMORY, cudaStream_t stream = 0)# Create a tensor with a C array for the shape using implicitly-allocated memory
- Parameters:
tensor – Tensor object to store newly-created tensor into
shape – Shape of tensor
space – memory space to allocate in. Default is managed memory.
stream – cuda stream to allocate in (only applicable to async allocations)
-
template<typename T, typename ShapeType, std::enable_if_t<!is_matx_shape_v<ShapeType> && !is_matx_descriptor_v<ShapeType> && !std::is_array_v<typename remove_cvref<ShapeType>::type>, bool> = true>
auto matx::make_tensor(ShapeType &&shape, matxMemorySpace_t space = MATX_MANAGED_MEMORY, cudaStream_t stream = 0)# Create a tensor from a conforming container type
Conforming containers have sequential iterators defined (both const and non-const). cuda::std::array and std::vector meet these criteria.
- Parameters:
shape – Shape of tensor
space – memory space to allocate in. Default is managed memory.
stream – cuda stream to allocate in (only applicable to async allocations)
- Returns:
New tensor
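A short sketch of passing a conforming container as the shape, assuming the MatX headers are available (`cuda::std::array` is one of the container types named above):

```cpp
#include <matx.h>
#include <cuda/std/array>

int main() {
  // Any container with sequential const/non-const iterators works as a shape
  cuda::std::array<matx::index_t, 2> shape{4, 8};
  auto t = matx::make_tensor<float>(shape);   // 4x8 tensor in managed memory
  return 0;
}
```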
-
template<typename TensorType, typename ShapeType, std::enable_if_t<is_tensor_view_v<TensorType> && !std::is_array_v<typename remove_cvref<ShapeType>::type>, bool> = true>
auto matx::make_tensor(TensorType &tensor, ShapeType &&shape, matxMemorySpace_t space = MATX_MANAGED_MEMORY, cudaStream_t stream = 0)# Create a tensor from a conforming container type
Conforming containers have sequential iterators defined (both const and non-const). cuda::std::array and std::vector meet these criteria.
- Parameters:
tensor – Tensor object to store newly-created tensor into
shape – Shape of tensor
space – memory space to allocate in. Default is managed memory.
stream – cuda stream to allocate in (only applicable to async allocations)
- Returns:
New tensor
-
template<typename TensorType, std::enable_if_t<is_tensor_view_v<TensorType>, bool> = true>
auto matx::make_tensor(TensorType &tensor, matxMemorySpace_t space = MATX_MANAGED_MEMORY, cudaStream_t stream = 0)# Create a 0D tensor with implicitly-allocated memory.
- Parameters:
tensor – Tensor object to store newly-created tensor into
space – memory space to allocate in. Default is managed memory.
stream – cuda stream to allocate in (only applicable to async allocations)
- Returns:
New tensor
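The 0D overload takes no shape at all; a sketch, assuming the MatX headers are available:

```cpp
#include <matx.h>

int main() {
  matx::tensor_t<float, 0> scalar;   // rank-0 tensor (a single element)
  matx::make_tensor(scalar);         // allocate its storage (managed memory)
  (scalar = 3.14f).run();
  return 0;
}
```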
-
template<typename T, int RANK>
auto matx::make_tensor(T *data, const index_t (&shape)[RANK], bool owning = false)# Create a tensor with user-defined memory and a C array
- Parameters:
data – Pointer to device data
shape – Shape of tensor
owning – If this class owns memory of data
- Returns:
New tensor
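A sketch of wrapping user-allocated device memory, assuming a CUDA-capable build. With the default `owning = false`, MatX never frees the pointer:

```cpp
#include <matx.h>

int main() {
  float *d_data;
  cudaMalloc(&d_data, 10 * 20 * sizeof(float));

  // Wrap existing device memory; owning=false means MatX will not free it
  auto t = matx::make_tensor<float>(d_data, {10, 20});

  // ... use t ...
  cudaFree(d_data);   // caller retains ownership
  return 0;
}
```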
-
template<typename TensorType, std::enable_if_t<is_tensor_view_v<TensorType>, bool> = true>
auto matx::make_tensor(TensorType &tensor, typename TensorType::value_type *data, const index_t (&shape)[TensorType::Rank()])# Create a tensor with user-defined memory and a C array
- Parameters:
tensor – Tensor object to store newly-created tensor into
data – Pointer to device data
shape – Shape of tensor
- Returns:
New tensor
-
template<typename T, typename ShapeType, std::enable_if_t<!is_matx_descriptor_v<ShapeType> && !std::is_array_v<typename remove_cvref<ShapeType>::type>, bool> = true>
auto matx::make_tensor(T *data, ShapeType &&shape, bool owning = false)# Create a tensor with user-defined memory and conforming shape type
- Parameters:
data – Pointer to device data
shape – Shape of tensor
owning – If this class owns memory of data
- Returns:
New tensor
-
template<typename TensorType, std::enable_if_t<is_tensor_view_v<TensorType>, bool> = true>
auto matx::make_tensor(TensorType &tensor, typename TensorType::value_type *data, typename TensorType::shape_container &&shape)# Create a tensor with user-defined memory and conforming shape type
- Parameters:
tensor – Tensor object to store newly-created tensor into
data – Pointer to device data
shape – Shape of tensor
- Returns:
New tensor
-
template<typename TensorType, std::enable_if_t<is_tensor_view_v<TensorType>, bool> = true>
auto matx::make_tensor(TensorType &tensor, typename TensorType::value_type *ptr)# Create a 0D tensor with user-defined memory
- Parameters:
tensor – Tensor object to store newly-created tensor into
ptr – Pointer to data
- Returns:
New tensor
-
template<typename T, typename D, std::enable_if_t<is_matx_descriptor_v<typename remove_cvref<D>::type>, bool> = true>
auto matx::make_tensor(T *const data, D &&desc, bool owning = false)# Create a tensor with user-defined memory and an existing descriptor
- Parameters:
data – Pointer to device data
desc – Tensor descriptor (tensor_desc_t)
owning – If this class owns memory of data
- Returns:
New tensor
-
template<typename TensorType, std::enable_if_t<is_tensor_view_v<TensorType>, bool> = true>
auto matx::make_tensor(TensorType &tensor, typename TensorType::value_type *const data, typename TensorType::desc_type &&desc)# Create a tensor with user-defined memory and an existing descriptor
- Parameters:
tensor – Tensor object to store newly-created tensor into
data – Pointer to device data
desc – Tensor descriptor (tensor_desc_t)
- Returns:
New tensor
-
template<typename T, typename D, std::enable_if_t<is_matx_descriptor_v<typename remove_cvref<D>::type>, bool> = true>
auto matx::make_tensor(D &&desc, matxMemorySpace_t space = MATX_MANAGED_MEMORY, cudaStream_t stream = 0)# Create a tensor with implicitly-allocated memory and an existing descriptor
- Parameters:
desc – Tensor descriptor (tensor_desc_t)
space – memory space to allocate in. Default is managed memory.
stream – cuda stream to allocate in (only applicable to async allocations)
- Returns:
New tensor
-
template<typename TensorType, std::enable_if_t<is_tensor_view_v<TensorType> && is_matx_descriptor_v<typename TensorType::desc_type>, bool> = true>
auto matx::make_tensor(TensorType &&tensor, typename TensorType::desc_type &&desc, matxMemorySpace_t space = MATX_MANAGED_MEMORY, cudaStream_t stream = 0)# Create a tensor with implicitly-allocated memory and an existing descriptor
- Parameters:
tensor – Tensor object to store newly-created tensor into
desc – Tensor descriptor (tensor_desc_t)
space – memory space to allocate in. Default is managed memory.
stream – cuda stream to allocate in (only applicable to async allocations)
- Returns:
New tensor
-
template<typename T, int RANK>
auto matx::make_tensor(T *const data, const index_t (&shape)[RANK], const index_t (&strides)[RANK], bool owning = false)# Create a tensor with user-defined memory and C-array shapes and strides
- Parameters:
data – Pointer to device data
shape – Shape of tensor
strides – Strides of tensor
owning – If this class owns memory of data
- Returns:
New tensor
-
template<typename TensorType, std::enable_if_t<is_tensor_view_v<TensorType>, bool> = true>
auto matx::make_tensor(TensorType &tensor, typename TensorType::value_type *const data, const index_t (&shape)[TensorType::Rank()], const index_t (&strides)[TensorType::Rank()])# Create a tensor with user-defined memory and C-array shapes and strides
- Parameters:
tensor – Tensor object to store newly-created tensor into
data – Pointer to device data
shape – Shape of tensor
strides – Strides of tensor
- Returns:
New tensor
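Explicit strides let a tensor view only part of an existing buffer. A sketch, assuming a CUDA-capable build; strides are given in elements, not bytes:

```cpp
#include <matx.h>

int main() {
  float *d_data;
  cudaMalloc(&d_data, 4 * 6 * sizeof(float));

  // View the 4x6 buffer as a 4x3 tensor that skips every other column:
  // row stride = 6 elements, column stride = 2 elements
  auto strided = matx::make_tensor<float>(d_data, {4, 3}, {6, 2});

  cudaFree(d_data);   // owning defaults to false, so the caller frees
  return 0;
}
```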
Custom Allocator Support#
-
template<typename T, int RANK, typename Allocator>
auto matx::make_tensor(const index_t (&shape)[RANK], Allocator &&alloc)# Create a tensor with custom allocator using C-array shape
- Parameters:
shape – Shape of tensor as C-array
alloc – Custom allocator (PMR allocator, custom allocator pointer, etc.)
- Returns:
New tensor
-
template<typename T, typename ShapeType, typename Allocator, std::enable_if_t<!is_matx_shape_v<ShapeType> && !is_matx_descriptor_v<ShapeType> && !std::is_array_v<typename remove_cvref<ShapeType>::type>, bool> = true>
auto matx::make_tensor(ShapeType &&shape, Allocator &&alloc)# Create a tensor with custom allocator using conforming shape type
- Parameters:
shape – Shape of tensor (tuple, array, etc.)
alloc – Custom allocator (PMR allocator, custom allocator pointer, etc.)
- Returns:
New tensor
-
template<typename TensorType, typename Allocator, std::enable_if_t<is_tensor_view_v<TensorType>, bool> = true>
void matx::make_tensor(TensorType &tensor, const index_t (&shape)[TensorType::Rank()], Allocator &&alloc)# Create a tensor with custom allocator using existing tensor reference
- Parameters:
tensor – Tensor object to store newly-created tensor into
shape – Shape of tensor as C-array
alloc – Custom allocator (PMR allocator, custom allocator pointer, etc.)
-
template<typename TensorType, typename ShapeType, typename Allocator, std::enable_if_t<is_tensor_view_v<TensorType> && !std::is_array_v<typename remove_cvref<ShapeType>::type>, bool> = true>
void matx::make_tensor(TensorType &tensor, ShapeType &&shape, Allocator &&alloc)# Create a tensor with custom allocator using existing tensor reference and conforming shape
- Parameters:
tensor – Tensor object to store newly-created tensor into
shape – Shape of tensor (tuple, array, etc.)
alloc – Custom allocator (PMR allocator, custom allocator pointer, etc.)
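A hypothetical sketch of the custom-allocator overloads using a standard PMR resource, as the parameter documentation suggests. The exact allocator requirements depend on the MatX version, so treat this only as an illustration of the call shape:

```cpp
#include <matx.h>
#include <memory_resource>

int main() {
  // Assumed: a PMR pool backing the tensor allocation
  std::pmr::monotonic_buffer_resource pool{1 << 20};
  std::pmr::polymorphic_allocator<float> alloc{&pool};

  auto t = matx::make_tensor<float>({64, 64}, alloc);
  return 0;
}
```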
Return by Pointer#
-
template<typename T, int RANK>
auto matx::make_tensor_p(const index_t (&shape)[RANK], matxMemorySpace_t space = MATX_MANAGED_MEMORY, cudaStream_t stream = 0)# Create a tensor with a C array for the shape using implicitly-allocated memory. Caller is responsible for deleting the tensor.
- Parameters:
shape – Shape of tensor
space – memory space to allocate in. Default is managed memory.
stream – cuda stream to allocate in (only applicable to async allocations)
- Returns:
Pointer to new tensor
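Unlike the by-value overloads, make_tensor_p heap-allocates the tensor object itself and hands ownership to the caller. A sketch, assuming the MatX headers are available:

```cpp
#include <matx.h>

int main() {
  // Heap-allocated tensor; the caller must delete it
  auto *t = matx::make_tensor_p<float>({16});
  (*t = 0.0f).run();
  delete t;   // caller is responsible for deletion
  return 0;
}
```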
-
template<typename T, typename ShapeType, std::enable_if_t<!is_matx_shape_v<ShapeType> && !std::is_array_v<typename remove_cvref<ShapeType>::type>, bool> = true>
auto matx::make_tensor_p(ShapeType &&shape, matxMemorySpace_t space = MATX_MANAGED_MEMORY, cudaStream_t stream = 0)# Create a tensor from a conforming container type
Conforming containers have sequential iterators defined (both const and non-const). cuda::std::array and std::vector meet these criteria. Caller is responsible for deleting the tensor.
- Parameters:
shape – Shape of tensor
space – memory space to allocate in. Default is managed memory.
stream – cuda stream to allocate in (only applicable to async allocations)
- Returns:
Pointer to new tensor
-
template<typename T, typename ShapeType, std::enable_if_t<!is_matx_descriptor_v<ShapeType> && !std::is_array_v<typename remove_cvref<ShapeType>::type>, bool> = true>
auto matx::make_tensor_p(T *const data, ShapeType &&shape, bool owning = false)# Create a tensor with user-defined memory and conforming shape type
- Parameters:
data – Pointer to device data
shape – Shape of tensor
owning – If this class owns memory of data
- Returns:
Pointer to new tensor