MinkowskiEngine package¶
Submodules¶
MinkowskiEngine.MinkowskiBroadcast module¶
-
class
MinkowskiEngine.MinkowskiBroadcast.
MinkowskiBroadcast
¶ Bases:
torch.nn.modules.module.Module
Broadcast reduced features to all input coordinates.
\[\mathbf{y}_\mathbf{u} = \mathbf{x}_2 \; \text{for} \; \mathbf{u} \in \mathcal{C}^\text{in}\]
For all input \(\mathbf{x}_\mathbf{u}\), copy value \(\mathbf{x}_2\) element-wise. The output coordinates will be the same as the input coordinates \(\mathcal{C}^\text{in} = \mathcal{C}^\text{out}\). The first input \(\mathbf{x}_1\) is only used for defining the output coordinates.
Note
The first argument takes a sparse tensor; the second argument takes features that are reduced to the origin. This can typically be done with a global reduction such as MinkowskiGlobalPooling.
forward
(input: MinkowskiSparseTensor.SparseTensor, input_glob: MinkowskiSparseTensor.SparseTensor)¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
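A minimal usage sketch (the coordinates, feature sizes, and variable names below are illustrative assumptions): reduce the input to the origin with a global pooling, then broadcast the reduced feature back to every input coordinate.
>>> import torch
>>> import MinkowskiEngine as ME
>>> coordinates = torch.IntTensor([[0, 0, 0], [0, 1, 0], [0, 0, 1]])  # [batch, x, y]
>>> sinput = ME.SparseTensor(torch.rand(3, 8), coordinates)
>>> sglobal = ME.MinkowskiGlobalAvgPooling()(sinput)  # one feature vector per batch item
>>> sout = ME.MinkowskiBroadcast()(sinput, sglobal)   # the global feature copied to all input coordinates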
-
training
: bool¶
-
-
class
MinkowskiEngine.MinkowskiBroadcast.
MinkowskiBroadcastAddition
¶ Bases:
MinkowskiEngine.MinkowskiBroadcast.MinkowskiBroadcastBase
Broadcast the reduced features to all input coordinates.
\[\mathbf{y}_\mathbf{u} = \mathbf{x}_{1, \mathbf{u}} + \mathbf{x}_2 \; \text{for} \; \mathbf{u} \in \mathcal{C}^\text{in}\]
For all input \(\mathbf{x}_\mathbf{u}\), add \(\mathbf{x}_2\). The output coordinates will be the same as the input coordinates \(\mathcal{C}^\text{in} = \mathcal{C}^\text{out}\).
Note
The first argument takes a sparse tensor; the second argument takes features that are reduced to the origin. This can typically be done with a global reduction such as MinkowskiGlobalPooling.
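A usage sketch (shapes are assumptions for illustration; both inputs must have the same number of channels):
>>> import torch
>>> import MinkowskiEngine as ME
>>> coordinates = torch.IntTensor([[0, 0, 0], [0, 1, 0], [0, 0, 1]])
>>> sinput = ME.SparseTensor(torch.rand(3, 8), coordinates)
>>> sglobal = ME.MinkowskiGlobalAvgPooling()(sinput)
>>> sout = ME.MinkowskiBroadcastAddition()(sinput, sglobal)  # x1 + broadcast(x2), still 8 channels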
training
: bool¶
-
-
class
MinkowskiEngine.MinkowskiBroadcast.
MinkowskiBroadcastBase
(operation_type)¶ Bases:
MinkowskiCommon.MinkowskiModuleBase
-
forward
(input: MinkowskiSparseTensor.SparseTensor, input_glob: MinkowskiSparseTensor.SparseTensor)¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
-
training
: bool¶
-
-
class
MinkowskiEngine.MinkowskiBroadcast.
MinkowskiBroadcastConcatenation
¶ Bases:
MinkowskiEngine.MinkowskiBroadcast.MinkowskiBroadcast
Broadcast reduced features to all input coordinates and concatenate to the input.
\[\mathbf{y}_\mathbf{u} = [\mathbf{x}_{1,\mathbf{u}}, \mathbf{x}_2] \; \text{for} \; \mathbf{u} \in \mathcal{C}^\text{in}\]
For all input \(\mathbf{x}_\mathbf{u}\), concatenate vector \(\mathbf{x}_2\). \([\cdot, \cdot]\) is a concatenation operator. The output coordinates will be the same as the input coordinates \(\mathcal{C}^\text{in} = \mathcal{C}^\text{out}\).
Note
The first argument takes a sparse tensor; the second argument takes features that are reduced to the origin. This can typically be done with a global reduction such as MinkowskiGlobalPooling.
forward
(input: MinkowskiSparseTensor.SparseTensor, input_glob: MinkowskiSparseTensor.SparseTensor)¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
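A usage sketch (assumed shapes): the output feature dimension is the sum of the two input feature dimensions.
>>> import torch
>>> import MinkowskiEngine as ME
>>> coordinates = torch.IntTensor([[0, 0, 0], [0, 1, 0], [0, 0, 1]])
>>> sinput = ME.SparseTensor(torch.rand(3, 8), coordinates)
>>> sglobal = ME.MinkowskiGlobalAvgPooling()(sinput)
>>> sout = ME.MinkowskiBroadcastConcatenation()(sinput, sglobal)  # 8 + 8 = 16 channels per coordinate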
-
training
: bool¶
-
-
class
MinkowskiEngine.MinkowskiBroadcast.
MinkowskiBroadcastFunction
¶ Bases:
torch.autograd.function.Function
-
static
backward
(ctx, grad_out_feat)¶ Defines a formula for differentiating the operation.
This function is to be overridden by all subclasses.
It must accept a context ctx as the first argument, followed by as many outputs as forward() returned, and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input.
The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad as a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs gradient computed w.r.t. the output.
-
static
forward
(ctx, input_features: torch.Tensor, input_features_global: torch.Tensor, operation_type: MinkowskiEngineBackend._C.BroadcastMode, in_coords_key: MinkowskiEngineBackend._C.CoordinateMapKey, glob_coords_key: MinkowskiEngineBackend._C.CoordinateMapKey, coords_manager: MinkowskiCoordinateManager.CoordinateManager)¶ Performs the operation.
This function is to be overridden by all subclasses.
It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types).
The context can be used to store tensors that can be then retrieved during the backward pass.
-
-
class
MinkowskiEngine.MinkowskiBroadcast.
MinkowskiBroadcastMultiplication
¶ Bases:
MinkowskiEngine.MinkowskiBroadcast.MinkowskiBroadcastBase
Broadcast reduced features to all input coordinates.
\[\mathbf{y}_\mathbf{u} = \mathbf{x}_{1, \mathbf{u}} \times \mathbf{x}_2 \; \text{for} \; \mathbf{u} \in \mathcal{C}^\text{in}\]
For all input \(\mathbf{x}_\mathbf{u}\), multiply \(\mathbf{x}_2\) element-wise. The output coordinates will be the same as the input coordinates \(\mathcal{C}^\text{in} = \mathcal{C}^\text{out}\).
Note
The first argument takes a sparse tensor; the second argument takes features that are reduced to the origin. This can typically be done with a global reduction such as MinkowskiGlobalPooling.
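A usage sketch (assumed shapes), analogous to the addition variant:
>>> import torch
>>> import MinkowskiEngine as ME
>>> coordinates = torch.IntTensor([[0, 0, 0], [0, 1, 0], [0, 0, 1]])
>>> sinput = ME.SparseTensor(torch.rand(3, 8), coordinates)
>>> sglobal = ME.MinkowskiGlobalMaxPooling()(sinput)
>>> sout = ME.MinkowskiBroadcastMultiplication()(sinput, sglobal)  # element-wise x1 * broadcast(x2)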
training
: bool¶
-
MinkowskiEngine.MinkowskiChannelwiseConvolution module¶
-
class
MinkowskiEngine.MinkowskiChannelwiseConvolution.
MinkowskiChannelwiseConvolution
(in_channels, kernel_size=-1, stride=1, dilation=1, bias=False, kernel_generator=None, dimension=-1)¶ Bases:
MinkowskiCommon.MinkowskiModuleBase
-
bias
¶
-
conv
¶
-
dimension
¶
-
forward
(input: MinkowskiSparseTensor.SparseTensor, coords: Optional[Union[torch.IntTensor, MinkowskiEngineBackend._C.CoordinateMapKey, MinkowskiSparseTensor.SparseTensor]] = None)¶
input (MinkowskiEngine.SparseTensor): Input sparse tensor to apply a convolution on.
coords ((torch.IntTensor, MinkowskiEngine.CoordinateMapKey, MinkowskiEngine.SparseTensor), optional): If provided, generate results on the provided coordinates. None by default.
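A usage sketch (a channel-wise, i.e. depthwise, convolution keeps the channel count; the sizes below are assumptions for illustration):
>>> import torch
>>> import MinkowskiEngine as ME
>>> coordinates = torch.IntTensor([[0, 0, 0], [0, 1, 0], [0, 0, 2]])
>>> sinput = ME.SparseTensor(torch.rand(3, 8), coordinates)
>>> conv = ME.MinkowskiChannelwiseConvolution(in_channels=8, kernel_size=3, dimension=2)
>>> sout = conv(sinput)  # out_channels == in_channels == 8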
-
in_channels
¶
-
kernel
¶
-
kernel_generator
¶
-
out_channels
¶
-
reset_parameters
(is_transpose=False)¶
-
training
: bool¶
-
MinkowskiEngine.MinkowskiCommon module¶
-
class
MinkowskiEngine.MinkowskiCommon.
MinkowskiModuleBase
¶ Bases:
torch.nn.modules.module.Module
-
training
: bool¶
-
-
MinkowskiEngine.MinkowskiCommon.
convert_to_int_list
(arg: Union[int, collections.abc.Sequence, numpy.ndarray, torch.Tensor], dimension: int)¶
-
MinkowskiEngine.MinkowskiCommon.
convert_to_int_tensor
(arg: Union[int, collections.abc.Sequence, numpy.ndarray, torch.IntTensor], dimension: int)¶
-
MinkowskiEngine.MinkowskiCommon.
get_minkowski_function
(name, variable)¶
-
MinkowskiEngine.MinkowskiCommon.
get_postfix
(tensor: torch.Tensor)¶
-
MinkowskiEngine.MinkowskiCommon.
prep_args
(tensor_stride: Union[int, collections.abc.Sequence, numpy.ndarray, torch.IntTensor], stride: Union[int, collections.abc.Sequence, numpy.ndarray, torch.IntTensor], kernel_size: Union[int, collections.abc.Sequence, numpy.ndarray, torch.IntTensor], dilation: Union[int, collections.abc.Sequence, numpy.ndarray, torch.IntTensor], region_type: Union[int, MinkowskiEngineBackend._C.RegionType], D=-1)¶
MinkowskiEngine.MinkowskiConvolution module¶
-
class
MinkowskiEngine.MinkowskiConvolution.
MinkowskiConvolution
(in_channels, out_channels, kernel_size=-1, stride=1, dilation=1, bias=False, kernel_generator=None, expand_coordinates=False, convolution_mode=<ConvolutionMode.DEFAULT: 0>, dimension=None)¶ Bases:
MinkowskiEngine.MinkowskiConvolution.MinkowskiConvolutionBase
Convolution layer for a sparse tensor.
\[\mathbf{x}_\mathbf{u} = \sum_{\mathbf{i} \in \mathcal{N}^D(\mathbf{u}, K, \mathcal{C}^\text{in})} W_\mathbf{i} \mathbf{x}_{\mathbf{i} + \mathbf{u}} \;\text{for} \; \mathbf{u} \in \mathcal{C}^\text{out}\]
where \(K\) is the kernel size and \(\mathcal{N}^D(\mathbf{u}, K, \mathcal{C}^\text{in})\) is the set of offsets that are at most \(\left \lceil{\frac{1}{2}(K - 1)} \right \rceil\) away from \(\mathbf{u}\) defined in \(\mathcal{S}^\text{in}\).
Note
For even \(K\), the kernel offset \(\mathcal{N}^D\) implementation is different from the above definition. The offsets range from \(\mathbf{i} \in [0, K)^D, \; \mathbf{i} \in \mathbb{Z}_+^D\).
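A usage sketch (the channel counts and coordinates are illustrative assumptions):
>>> import torch
>>> import MinkowskiEngine as ME
>>> coordinates = torch.IntTensor([[0, 0, 0], [0, 1, 0], [0, 0, 2]])
>>> sinput = ME.SparseTensor(torch.rand(3, 8), coordinates)
>>> conv = ME.MinkowskiConvolution(in_channels=8, out_channels=16, kernel_size=3, stride=2, dimension=2)
>>> sout = conv(sinput)  # sout.tensor_stride == [2, 2]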
-
bias
¶
-
conv
¶
-
dimension
¶
-
in_channels
¶
-
is_transpose
¶
-
kernel
¶
-
kernel_generator
¶
-
out_channels
¶
-
training
: bool¶
-
use_mm
¶
-
-
class
MinkowskiEngine.MinkowskiConvolution.
MinkowskiConvolutionBase
(in_channels, out_channels, kernel_size=-1, stride=1, dilation=1, bias=False, kernel_generator=None, is_transpose=False, expand_coordinates=False, convolution_mode=<ConvolutionMode.DEFAULT: 0>, dimension=-1)¶ Bases:
MinkowskiCommon.MinkowskiModuleBase
-
bias
¶
-
conv
¶
-
dimension
¶
-
forward
(input: MinkowskiSparseTensor.SparseTensor, coordinates: Optional[Union[torch.Tensor, MinkowskiEngineBackend._C.CoordinateMapKey, MinkowskiSparseTensor.SparseTensor]] = None)¶
input (MinkowskiEngine.SparseTensor): Input sparse tensor to apply a convolution on.
coordinates ((torch.IntTensor, MinkowskiEngine.CoordinateMapKey, MinkowskiEngine.SparseTensor), optional): If provided, generate results on the provided coordinates. None by default.
-
in_channels
¶
-
is_transpose
¶
-
kernel
¶
-
kernel_generator
¶
-
out_channels
¶
-
reset_parameters
(is_transpose=False)¶
-
training
: bool¶
-
use_mm
¶
-
-
class
MinkowskiEngine.MinkowskiConvolution.
MinkowskiConvolutionFunction
¶ Bases:
torch.autograd.function.Function
-
static
backward
(ctx, grad_out_feat: torch.Tensor)¶ Defines a formula for differentiating the operation.
This function is to be overridden by all subclasses.
It must accept a context ctx as the first argument, followed by as many outputs as forward() returned, and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input.
The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad as a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs gradient computed w.r.t. the output.
-
static
forward
(ctx, input_features: torch.Tensor, kernel_weights: torch.Tensor, kernel_generator: MinkowskiKernelGenerator.KernelGenerator, convolution_mode: MinkowskiEngineBackend._C.ConvolutionMode, in_coordinate_map_key: MinkowskiEngineBackend._C.CoordinateMapKey, out_coordinate_map_key: Optional[MinkowskiEngineBackend._C.CoordinateMapKey] = None, coordinate_manager: Optional[MinkowskiCoordinateManager.CoordinateManager] = None)¶ Performs the operation.
This function is to be overridden by all subclasses.
It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types).
The context can be used to store tensors that can be then retrieved during the backward pass.
-
-
class
MinkowskiEngine.MinkowskiConvolution.
MinkowskiConvolutionTranspose
(in_channels, out_channels, kernel_size=-1, stride=1, dilation=1, bias=False, kernel_generator=None, expand_coordinates=False, convolution_mode=<ConvolutionMode.DEFAULT: 0>, dimension=None)¶ Bases:
MinkowskiEngine.MinkowskiConvolution.MinkowskiConvolutionBase
A generalized sparse transposed convolution or deconvolution layer.
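A usage sketch (assumed shapes): a strided convolution followed by a transposed convolution with the same kernel size and stride restores the original tensor stride.
>>> import torch
>>> import MinkowskiEngine as ME
>>> coordinates = torch.IntTensor([[0, 0, 0], [0, 2, 0], [0, 0, 4]])
>>> sinput = ME.SparseTensor(torch.rand(3, 8), coordinates)
>>> conv = ME.MinkowskiConvolution(8, 16, kernel_size=2, stride=2, dimension=2)
>>> convtr = ME.MinkowskiConvolutionTranspose(16, 8, kernel_size=2, stride=2, dimension=2)
>>> sout = convtr(conv(sinput))  # back to tensor_stride == [1, 1]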
-
bias
¶
-
conv
¶
-
dimension
¶
-
in_channels
¶
-
is_transpose
¶
-
kernel
¶
-
kernel_generator
¶
-
out_channels
¶
-
training
: bool¶
-
use_mm
¶
-
-
class
MinkowskiEngine.MinkowskiConvolution.
MinkowskiConvolutionTransposeFunction
¶ Bases:
torch.autograd.function.Function
-
static
backward
(ctx, grad_out_feat: torch.Tensor)¶ Defines a formula for differentiating the operation.
This function is to be overridden by all subclasses.
It must accept a context ctx as the first argument, followed by as many outputs as forward() returned, and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input.
The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad as a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs gradient computed w.r.t. the output.
-
static
forward
(ctx, input_features: torch.Tensor, kernel_weights: torch.Tensor, kernel_generator: MinkowskiKernelGenerator.KernelGenerator, convolution_mode: MinkowskiEngineBackend._C.ConvolutionMode, in_coordinate_map_key: MinkowskiEngineBackend._C.CoordinateMapKey, out_coordinate_map_key: Optional[MinkowskiEngineBackend._C.CoordinateMapKey] = None, coordinate_manager: Optional[MinkowskiCoordinateManager.CoordinateManager] = None)¶ Performs the operation.
This function is to be overridden by all subclasses.
It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types).
The context can be used to store tensors that can be then retrieved during the backward pass.
-
-
class
MinkowskiEngine.MinkowskiConvolution.
MinkowskiGenerativeConvolutionTranspose
(in_channels, out_channels, kernel_size=-1, stride=1, dilation=1, bias=False, kernel_generator=None, convolution_mode=<ConvolutionMode.DEFAULT: 0>, dimension=None)¶ Bases:
MinkowskiEngine.MinkowskiConvolution.MinkowskiConvolutionBase
A generalized sparse transposed convolution or deconvolution layer that generates new coordinates.
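A usage sketch (assumed shapes; the behavioral note is a summary of the description above): unlike MinkowskiConvolutionTranspose, this layer also generates new coordinates covered by the kernel, which is useful for generative tasks.
>>> import torch
>>> import MinkowskiEngine as ME
>>> coordinates = torch.IntTensor([[0, 0, 0], [0, 2, 0]])
>>> sinput = ME.SparseTensor(torch.rand(2, 8), coordinates, tensor_stride=2)
>>> gconvtr = ME.MinkowskiGenerativeConvolutionTranspose(8, 16, kernel_size=3, stride=2, dimension=2)
>>> sout = gconvtr(sinput)  # typically more output coordinates than input coordinates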
-
bias
¶
-
conv
¶
-
dimension
¶
-
in_channels
¶
-
is_transpose
¶
-
kernel
¶
-
kernel_generator
¶
-
out_channels
¶
-
training
: bool¶
-
use_mm
¶
-
MinkowskiEngine.MinkowskiCoordinateManager module¶
-
class
MinkowskiEngine.MinkowskiCoordinateManager.
CoordinateManager
(D: int = 0, num_threads: int = -1, coordinate_map_type: Optional[MinkowskiEngineBackend._C.CoordinateMapType] = None, allocator_type: Optional[MinkowskiEngineBackend._C.GPUMemoryAllocatorType] = None, minkowski_algorithm: Optional[MinkowskiEngineBackend._C.MinkowskiAlgorithm] = None)¶ Bases:
object
-
exists_field_to_sparse
(field_map_key: MinkowskiEngineBackend._C.CoordinateMapKey, sparse_map_key: MinkowskiEngineBackend._C.CoordinateMapKey)¶
-
field_to_sparse_insert_and_map
(field_map_key: MinkowskiEngineBackend._C.CoordinateMapKey, sparse_tensor_stride: Union[int, collections.abc.Sequence, numpy.ndarray], sparse_tensor_string_id: str = '') → Tuple[MinkowskiEngineBackend._C.CoordinateMapKey, Tuple[torch.IntTensor, torch.IntTensor]]¶ Create a sparse tensor coordinate map with the tensor stride.
field_map_key (CoordinateMapKey): field map that a new sparse tensor will be created from.
tensor_stride (list): a list of D elements that defines the tensor stride for the new order-D + 1 sparse tensor.
string_id (str): string id of the new sparse tensor coordinate map key.
Example:
>>> manager = CoordinateManager(D=1)
>>> coordinates = torch.FloatTensor([[0, 0.1], [0, 2.3], [0, 1.2], [0, 2.4]])
>>> key, (unique_map, inverse_map) = manager.insert_field(coordinates, [1])
-
field_to_sparse_keys
(field_map_key: MinkowskiEngineBackend._C.CoordinateMapKey)¶
-
field_to_sparse_map
(field_map_key: MinkowskiEngineBackend._C.CoordinateMapKey, sparse_map_key: MinkowskiEngineBackend._C.CoordinateMapKey)¶
-
get_coordinate_field
(coords_key_or_tensor_strides) → torch.Tensor¶
-
get_coordinates
(coords_key_or_tensor_strides) → torch.Tensor¶
-
get_field_to_sparse_map
(field_map_key: MinkowskiEngineBackend._C.CoordinateMapKey, sparse_map_key: MinkowskiEngineBackend._C.CoordinateMapKey)¶
-
get_kernel_map
(in_key: MinkowskiEngineBackend._C.CoordinateMapKey, out_key: MinkowskiEngineBackend._C.CoordinateMapKey, stride=1, kernel_size=3, dilation=1, region_type=<RegionType.HYPER_CUBE: 0>, region_offset=None, is_transpose=False, is_pool=False) → dict¶ Alias of CoordinateManager.kernel_map. Will be deprecated in the next version.
-
get_unique_coordinate_map_key
(tensor_stride: Union[int, list]) → MinkowskiEngineBackend._C.CoordinateMapKey¶ Returns a unique coordinate_map_key for a given tensor stride.
tensor_stride
(list): a list of D elements that defines the tensor stride for the new order-D + 1 sparse tensor.
-
insert_and_map
(coordinates: torch.Tensor, tensor_stride: Union[int, collections.abc.Sequence, numpy.ndarray] = 1, string_id: str = '') → Tuple[MinkowskiEngineBackend._C.CoordinateMapKey, Tuple[torch.IntTensor, torch.IntTensor]]¶ Create a new coordinate map and return (key, (map, inverse_map)).
coordinates: torch.Tensor (Int tensor. CUDA if coordinate_map_type == CoordinateMapType.GPU) that defines the coordinates.
tensor_stride (list): a list of D elements that defines the tensor stride for the new order-D + 1 sparse tensor.
Example:
>>> manager = CoordinateManager(D=1)
>>> coordinates = torch.IntTensor([[0, 0], [0, 0], [0, 1], [0, 2]])
>>> key, (unique_map, inverse_map) = manager.insert_and_map(coordinates, [1])
>>> print(key)  # key is tensor_stride, string_id
[1]:""
>>> torch.all(coordinates[unique_map] == manager.get_coordinates(key))  # True
>>> torch.all(coordinates == coordinates[unique_map][inverse_map])  # True
-
insert_field
(coordinates: torch.Tensor, tensor_stride: collections.abc.Sequence, string_id: str = '') → Tuple[MinkowskiEngineBackend._C.CoordinateMapKey, Tuple[torch.IntTensor, torch.IntTensor]]¶ Create a new coordinate field map and return (key, (map, inverse_map)).
coordinates: torch.FloatTensor (CUDA if coordinate_map_type == CoordinateMapType.GPU) that defines the coordinates.
tensor_stride (list): a list of D elements that defines the tensor stride for the new order-D + 1 sparse tensor.
Example:
>>> manager = CoordinateManager(D=1)
>>> coordinates = torch.FloatTensor([[0, 0.1], [0, 2.3], [0, 1.2], [0, 2.4]])
>>> key, (unique_map, inverse_map) = manager.insert_field(coordinates, [1])
>>> print(key)  # key is tensor_stride, string_id
[1]:""
-
interpolation_map_weight
(key: MinkowskiEngineBackend._C.CoordinateMapKey, samples: torch.Tensor)¶
-
kernel_map
(in_key: MinkowskiEngineBackend._C.CoordinateMapKey, out_key: MinkowskiEngineBackend._C.CoordinateMapKey, stride=1, kernel_size=3, dilation=1, region_type=<RegionType.HYPER_CUBE: 0>, region_offset=None, is_transpose=False, is_pool=False) → dict¶ Get kernel in-out maps for the specified coords keys or tensor strides.
Returns dict{kernel_index: in_out_tensor} where in_out_tensor[0] contains the input row indices that correspond to in_out_tensor[1], the row indices of the output.
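A sketch of querying a kernel map (a minimal setup; the variable names and the strided pooling map are assumptions for illustration):
>>> import torch
>>> import MinkowskiEngine as ME
>>> coordinates = torch.IntTensor([[0, 0, 0], [0, 1, 0], [0, 0, 2]])
>>> sinput = ME.SparseTensor(torch.rand(3, 8), coordinates)
>>> manager = sinput.coordinate_manager
>>> in_key = sinput.coordinate_map_key
>>> out_key = manager.stride(in_key, [2, 2])
>>> kmap = manager.kernel_map(in_key, out_key, stride=2, kernel_size=2, is_pool=True)
>>> # kmap[k] holds (input row indices, output row indices) for kernel offset k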
-
number_of_unique_batch_indices
() → int¶
-
origin
() → MinkowskiEngineBackend._C.CoordinateMapKey¶
-
origin_field
() → MinkowskiEngineBackend._C.CoordinateMapKey¶
-
origin_field_map
(key: MinkowskiEngineBackend._C.CoordinateMapKey)¶
-
origin_map
(key: MinkowskiEngineBackend._C.CoordinateMapKey)¶
-
size
(coordinate_map_key: MinkowskiEngineBackend._C.CoordinateMapKey) → int¶
-
stride
(coordinate_map_key: MinkowskiEngineBackend._C.CoordinateMapKey, stride: Union[int, collections.abc.Sequence, numpy.ndarray, torch.Tensor], string_id: str = '') → MinkowskiEngineBackend._C.CoordinateMapKey¶ Generate a new coordinate map and return the key.
coordinate_map_key (MinkowskiEngine.CoordinateMapKey): input map to generate the strided map from.
stride: stride size.
-
stride_map
(in_key: MinkowskiEngineBackend._C.CoordinateMapKey, stride_key: MinkowskiEngineBackend._C.CoordinateMapKey)¶
-
union_map
(in_keys: list, out_key)¶
-
-
class
MinkowskiEngine.MinkowskiCoordinateManager.
CoordsManager
(**kwargs)¶ Bases:
object
-
MinkowskiEngine.MinkowskiCoordinateManager.
set_coordinate_map_type
(coordinate_map_type: MinkowskiEngineBackend._C.CoordinateMapType)¶ Set the default coordinate map type.
MinkowskiEngine automatically sets the coordinate_map_type to CUDA if an NVIDIA GPU is available. Use this function to control the default coordinate map type.
-
MinkowskiEngine.MinkowskiCoordinateManager.
set_gpu_allocator
(backend: MinkowskiEngineBackend._C.GPUMemoryAllocatorType)¶ Set the GPU memory allocator
By default, the Minkowski Engine will use the pytorch memory pool to allocate temporary GPU memory slots. This allows the pytorch backend to effectively reuse the memory pool shared between the pytorch backend and the Minkowski Engine. It tends to allow training with larger batch sizes given a fixed GPU memory. However, the pytorch memory manager tends to be slower than allocating GPU memory directly using raw CUDA calls.
By default, the Minkowski Engine uses ME.GPUMemoryAllocatorType.PYTORCH for memory management.
Example:
>>> import MinkowskiEngine as ME
>>> # Set the GPU memory manager backend to raw CUDA calls
>>> ME.set_gpu_allocator(ME.GPUMemoryAllocatorType.CUDA)
>>> # Set the GPU memory manager backend to the pytorch c10 allocator
>>> ME.set_gpu_allocator(ME.GPUMemoryAllocatorType.PYTORCH)
-
MinkowskiEngine.MinkowskiCoordinateManager.
set_memory_manager_backend
(backend: MinkowskiEngineBackend._C.GPUMemoryAllocatorType)¶ Alias for set_gpu_allocator. Deprecated and will be removed.
MinkowskiEngine.MinkowskiFunctional module¶
-
MinkowskiEngine.MinkowskiFunctional.
alpha_dropout
(input, *args, **kwargs)¶
-
MinkowskiEngine.MinkowskiFunctional.
batch_norm
(input, *args, **kwargs)¶
-
MinkowskiEngine.MinkowskiFunctional.
binary_cross_entropy
(input, target, *args, **kwargs)¶
-
MinkowskiEngine.MinkowskiFunctional.
binary_cross_entropy_with_logits
(input, target, *args, **kwargs)¶
-
MinkowskiEngine.MinkowskiFunctional.
celu
(input, *args, **kwargs)¶
-
MinkowskiEngine.MinkowskiFunctional.
cross_entropy
(input, target, *args, **kwargs)¶
-
MinkowskiEngine.MinkowskiFunctional.
dropout
(input, *args, **kwargs)¶
-
MinkowskiEngine.MinkowskiFunctional.
elu
(input, *args, **kwargs)¶
-
MinkowskiEngine.MinkowskiFunctional.
gelu
(input, *args, **kwargs)¶
-
MinkowskiEngine.MinkowskiFunctional.
glu
(input, *args, **kwargs)¶
-
MinkowskiEngine.MinkowskiFunctional.
gumbel_softmax
(input, *args, **kwargs)¶
-
MinkowskiEngine.MinkowskiFunctional.
hardshrink
(input, *args, **kwargs)¶
-
MinkowskiEngine.MinkowskiFunctional.
hardsigmoid
(input, *args, **kwargs)¶
-
MinkowskiEngine.MinkowskiFunctional.
hardswish
(input, *args, **kwargs)¶
-
MinkowskiEngine.MinkowskiFunctional.
hardtanh
(input, *args, **kwargs)¶
-
MinkowskiEngine.MinkowskiFunctional.
hinge_embedding_loss
(input, target, *args, **kwargs)¶
-
MinkowskiEngine.MinkowskiFunctional.
kl_div
(input, target, *args, **kwargs)¶
-
MinkowskiEngine.MinkowskiFunctional.
l1_loss
(input, target, *args, **kwargs)¶
-
MinkowskiEngine.MinkowskiFunctional.
leaky_relu
(input, *args, **kwargs)¶
-
MinkowskiEngine.MinkowskiFunctional.
linear
(input, *args, **kwargs)¶
-
MinkowskiEngine.MinkowskiFunctional.
log_softmax
(input, *args, **kwargs)¶
-
MinkowskiEngine.MinkowskiFunctional.
logsigmoid
(input, *args, **kwargs)¶
-
MinkowskiEngine.MinkowskiFunctional.
mse_loss
(input, target, *args, **kwargs)¶
-
MinkowskiEngine.MinkowskiFunctional.
multi_margin_loss
(input, target, *args, **kwargs)¶
-
MinkowskiEngine.MinkowskiFunctional.
multilabel_margin_loss
(input, target, *args, **kwargs)¶
-
MinkowskiEngine.MinkowskiFunctional.
multilabel_soft_margin_loss
(input, target, *args, **kwargs)¶
-
MinkowskiEngine.MinkowskiFunctional.
nll_loss
(input, target, *args, **kwargs)¶
-
MinkowskiEngine.MinkowskiFunctional.
normalize
(input, *args, **kwargs)¶
-
MinkowskiEngine.MinkowskiFunctional.
poisson_nll_loss
(input, target, *args, **kwargs)¶
-
MinkowskiEngine.MinkowskiFunctional.
prelu
(input, *args, **kwargs)¶
-
MinkowskiEngine.MinkowskiFunctional.
relu
(input, *args, **kwargs)¶
-
MinkowskiEngine.MinkowskiFunctional.
relu6
(input, *args, **kwargs)¶
-
MinkowskiEngine.MinkowskiFunctional.
rrelu
(input, *args, **kwargs)¶
-
MinkowskiEngine.MinkowskiFunctional.
selu
(input, *args, **kwargs)¶
-
MinkowskiEngine.MinkowskiFunctional.
sigmoid
(input, *args, **kwargs)¶
-
MinkowskiEngine.MinkowskiFunctional.
silu
(input, *args, **kwargs)¶
-
MinkowskiEngine.MinkowskiFunctional.
smooth_l1_loss
(input, target, *args, **kwargs)¶
-
MinkowskiEngine.MinkowskiFunctional.
soft_margin_loss
(input, target, *args, **kwargs)¶
-
MinkowskiEngine.MinkowskiFunctional.
softmax
(input, *args, **kwargs)¶
-
MinkowskiEngine.MinkowskiFunctional.
softmin
(input, *args, **kwargs)¶
-
MinkowskiEngine.MinkowskiFunctional.
softplus
(input, *args, **kwargs)¶
-
MinkowskiEngine.MinkowskiFunctional.
softshrink
(input, *args, **kwargs)¶
-
MinkowskiEngine.MinkowskiFunctional.
softsign
(input, *args, **kwargs)¶
-
MinkowskiEngine.MinkowskiFunctional.
tanh
(input, *args, **kwargs)¶
-
MinkowskiEngine.MinkowskiFunctional.
tanhshrink
(input, *args, **kwargs)¶
-
MinkowskiEngine.MinkowskiFunctional.
threshold
(input, *args, **kwargs)¶
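These functionals apply the corresponding torch.nn.functional operation to the features of a sparse tensor and return the result on the same coordinates. A minimal sketch (shapes are assumptions for illustration; extra arguments are forwarded to the underlying pytorch functional):
>>> import torch
>>> import MinkowskiEngine as ME
>>> import MinkowskiEngine.MinkowskiFunctional as MF
>>> coordinates = torch.IntTensor([[0, 0, 0], [0, 1, 0]])
>>> sinput = ME.SparseTensor(torch.rand(2, 8), coordinates)
>>> sout = MF.relu(sinput)
>>> sout = MF.log_softmax(sinput, dim=1)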
MinkowskiEngine.MinkowskiInterpolation module¶
-
class
MinkowskiEngine.MinkowskiInterpolation.
MinkowskiInterpolation
(return_kernel_map=False, return_weights=False)¶ Bases:
MinkowskiCommon.MinkowskiModuleBase
Sample linearly interpolated features at the provided points.
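A usage sketch (coordinates and channel counts are assumptions for illustration): the query points are continuous coordinates with a leading batch index.
>>> import torch
>>> import MinkowskiEngine as ME
>>> coordinates = torch.IntTensor([[0, 0, 0], [0, 1, 0], [0, 0, 1]])
>>> sinput = ME.SparseTensor(torch.rand(3, 8), coordinates)
>>> tfield = torch.FloatTensor([[0, 0.4, 0.2]])  # [batch, x, y]
>>> feats = ME.MinkowskiInterpolation()(sinput, tfield)  # linearly interpolated features, shape 1 x 8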
-
forward
(input: MinkowskiSparseTensor.SparseTensor, tfield: torch.Tensor)¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
-
training
: bool¶
-
-
class
MinkowskiEngine.MinkowskiInterpolation.
MinkowskiInterpolationFunction
¶ Bases:
torch.autograd.function.Function
-
static
backward
(ctx, grad_out_feat=None, grad_in_map=None, grad_out_map=None, grad_weights=None)¶ Defines a formula for differentiating the operation.
This function is to be overridden by all subclasses.
It must accept a context ctx as the first argument, followed by as many outputs as forward() returned, and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input.
The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad as a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs gradient computed w.r.t. the output.
-
static
forward
(ctx, input_features: torch.Tensor, tfield: torch.Tensor, in_coordinate_map_key: MinkowskiEngineBackend._C.CoordinateMapKey, coordinate_manager: Optional[MinkowskiCoordinateManager.CoordinateManager] = None)¶ Performs the operation.
This function is to be overridden by all subclasses.
It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types).
The context can be used to store tensors that can be then retrieved during the backward pass.
-
MinkowskiEngine.MinkowskiKernelGenerator module¶
-
class
MinkowskiEngine.MinkowskiKernelGenerator.
KernelGenerator
(kernel_size=-1, stride=1, dilation=1, is_transpose: bool = False, region_type: MinkowskiEngineBackend._C.RegionType = <RegionType.HYPER_CUBE: 0>, region_offsets: Optional[torch.Tensor] = None, expand_coordinates: bool = False, axis_types=None, dimension=-1)¶ Bases:
object
-
axis_types
¶
-
cache
¶
-
dimension
¶
-
expand_coordinates
¶
-
get_kernel
(tensor_stride, is_transpose)¶
-
kernel_dilation
¶
-
kernel_size
¶
-
kernel_stride
¶
-
kernel_volume
¶
-
region_offsets
¶
-
region_type
¶
-
requires_strided_coordinates
¶
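A usage sketch (the hyper-cross kernel is an assumption for illustration): a custom kernel generator can be passed to convolution layers to change the kernel shape.
>>> import MinkowskiEngine as ME
>>> kg = ME.KernelGenerator(kernel_size=3, stride=1, dilation=1, region_type=ME.RegionType.HYPER_CROSS, dimension=3)
>>> conv = ME.MinkowskiConvolution(8, 16, kernel_size=3, kernel_generator=kg, dimension=3)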
-
-
class
MinkowskiEngine.MinkowskiKernelGenerator.
KernelRegion
(kernel_size, kernel_stride, kernel_dilation, region_type, offset, D)¶ Bases:
MinkowskiEngine.MinkowskiKernelGenerator.KernelRegion
Adds functionality to a named tuple.
-
MinkowskiEngine.MinkowskiKernelGenerator.
convert_region_type
(region_type: MinkowskiEngineBackend._C.RegionType, tensor_stride: Union[collections.abc.Sequence, numpy.ndarray, torch.IntTensor], kernel_size: Union[collections.abc.Sequence, numpy.ndarray, torch.IntTensor], up_stride: Union[collections.abc.Sequence, numpy.ndarray, torch.IntTensor], dilation: Union[collections.abc.Sequence, numpy.ndarray, torch.IntTensor], region_offset: Union[collections.abc.Sequence, numpy.ndarray, torch.IntTensor], axis_types: Union[collections.abc.Sequence, numpy.ndarray, torch.IntTensor], dimension: int, center: bool = True)¶ When center is True, the custom region_offset will be centered at the origin. Currently, HYPER_CUBE and HYPER_CROSS with odd kernel sizes cannot use center=False.
up_stride: stride for conv_transpose; otherwise set it to 1.
-
MinkowskiEngine.MinkowskiKernelGenerator.
get_kernel_volume
(region_type, kernel_size, region_offset, axis_types, dimension)¶ When center is True, the custom region_offset will be centered at the origin. Currently, HYPER_CUBE and HYPER_CROSS with odd kernel sizes cannot use center=False.
-
MinkowskiEngine.MinkowskiKernelGenerator.
save_ctx
(ctx, kernel_generator: MinkowskiEngine.MinkowskiKernelGenerator.KernelGenerator, in_coords_key: MinkowskiEngineBackend._C.CoordinateMapKey, out_coords_key: MinkowskiEngineBackend._C.CoordinateMapKey, coordinate_manager: MinkowskiCoordinateManager.CoordinateManager)¶
MinkowskiEngine.MinkowskiNetwork module¶
-
class
MinkowskiEngine.MinkowskiNetwork.
MinkowskiNetwork
(D)¶ Bases:
torch.nn.modules.module.Module
,abc.ABC
MinkowskiNetwork: an abstract class for sparse convnets.
Note: All modules that use the same coordinates must use the same net_metadata.
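A minimal subclass sketch (the layer sizes and class name are illustrative assumptions):
>>> import MinkowskiEngine as ME
>>> class ExampleNetwork(ME.MinkowskiNetwork):
>>>     def __init__(self, in_feat, out_feat, D):
>>>         super(ExampleNetwork, self).__init__(D)
>>>         self.conv = ME.MinkowskiConvolution(in_feat, out_feat, kernel_size=3, dimension=D)
>>>         self.relu = ME.MinkowskiReLU()
>>>     def forward(self, x):
>>>         return self.relu(self.conv(x))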
-
abstract
forward
(x)¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
-
init
(x)¶ Initialize coordinates if they do not exist.
-
training
: bool¶
-
MinkowskiEngine.MinkowskiNonlinearity module¶
-
class
MinkowskiEngine.MinkowskiNonlinearity.
MinkowskiAdaptiveLogSoftmaxWithLoss
(*args, **kwargs)¶ Bases:
MinkowskiEngine.MinkowskiNonlinearity.MinkowskiNonlinearityBase
-
MODULE
¶ alias of
torch.nn.modules.adaptive.AdaptiveLogSoftmaxWithLoss
-
training
: bool¶
-
-
class
MinkowskiEngine.MinkowskiNonlinearity.
MinkowskiAlphaDropout
(*args, **kwargs)¶ Bases:
MinkowskiEngine.MinkowskiNonlinearity.MinkowskiNonlinearityBase
-
MODULE
¶ alias of
torch.nn.modules.dropout.AlphaDropout
-
training
: bool¶
-
-
class
MinkowskiEngine.MinkowskiNonlinearity.
MinkowskiCELU
(*args, **kwargs)¶ Bases:
MinkowskiEngine.MinkowskiNonlinearity.MinkowskiNonlinearityBase
-
MODULE
¶ alias of
torch.nn.modules.activation.CELU
-
training
: bool¶
-
-
class
MinkowskiEngine.MinkowskiNonlinearity.
MinkowskiDropout
(*args, **kwargs)¶ Bases:
MinkowskiEngine.MinkowskiNonlinearity.MinkowskiNonlinearityBase
-
MODULE
¶ alias of
torch.nn.modules.dropout.Dropout
-
training
: bool¶
-
-
class
MinkowskiEngine.MinkowskiNonlinearity.
MinkowskiELU
(*args, **kwargs)¶ Bases:
MinkowskiEngine.MinkowskiNonlinearity.MinkowskiNonlinearityBase
-
MODULE
¶ alias of
torch.nn.modules.activation.ELU
-
training
: bool¶
-
-
class
MinkowskiEngine.MinkowskiNonlinearity.
MinkowskiGELU
(*args, **kwargs)¶ Bases:
MinkowskiEngine.MinkowskiNonlinearity.MinkowskiNonlinearityBase
-
MODULE
¶ alias of
torch.nn.modules.activation.GELU
-
training
: bool¶
-
-
class
MinkowskiEngine.MinkowskiNonlinearity.
MinkowskiHardshrink
(*args, **kwargs)¶ Bases:
MinkowskiEngine.MinkowskiNonlinearity.MinkowskiNonlinearityBase
-
MODULE
¶ alias of
torch.nn.modules.activation.Hardshrink
-
training
: bool¶
-
-
class
MinkowskiEngine.MinkowskiNonlinearity.
MinkowskiHardsigmoid
(*args, **kwargs)¶ Bases:
MinkowskiEngine.MinkowskiNonlinearity.MinkowskiNonlinearityBase
-
MODULE
¶ alias of
torch.nn.modules.activation.Hardsigmoid
-
training
: bool¶
-
-
class
MinkowskiEngine.MinkowskiNonlinearity.
MinkowskiHardswish
(*args, **kwargs)¶ Bases:
MinkowskiEngine.MinkowskiNonlinearity.MinkowskiNonlinearityBase
-
MODULE
¶ alias of
torch.nn.modules.activation.Hardswish
-
training
: bool¶
-
-
class
MinkowskiEngine.MinkowskiNonlinearity.
MinkowskiHardtanh
(*args, **kwargs)¶ Bases:
MinkowskiEngine.MinkowskiNonlinearity.MinkowskiNonlinearityBase
-
MODULE
¶ alias of
torch.nn.modules.activation.Hardtanh
-
training
: bool¶
-
-
class
MinkowskiEngine.MinkowskiNonlinearity.
MinkowskiLeakyReLU
(*args, **kwargs)¶ Bases:
MinkowskiEngine.MinkowskiNonlinearity.MinkowskiNonlinearityBase
-
MODULE
¶ alias of
torch.nn.modules.activation.LeakyReLU
-
training
: bool¶
-
-
class
MinkowskiEngine.MinkowskiNonlinearity.
MinkowskiLogSigmoid
(*args, **kwargs)¶ Bases:
MinkowskiEngine.MinkowskiNonlinearity.MinkowskiNonlinearityBase
-
MODULE
¶ alias of
torch.nn.modules.activation.LogSigmoid
-
training
: bool¶
-
-
class
MinkowskiEngine.MinkowskiNonlinearity.
MinkowskiLogSoftmax
(*args, **kwargs)¶ Bases:
MinkowskiEngine.MinkowskiNonlinearity.MinkowskiNonlinearityBase
-
MODULE
¶ alias of
torch.nn.modules.activation.LogSoftmax
-
training
: bool¶
-
-
class
MinkowskiEngine.MinkowskiNonlinearity.
MinkowskiNonlinearityBase
(*args, **kwargs)¶ Bases:
MinkowskiCommon.MinkowskiModuleBase
-
MODULE
= None¶
-
forward
(input)¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
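All nonlinearity wrappers in this module follow the same pattern: apply the wrapped MODULE to the features of the input tensor. A usage sketch (assumed shapes):
>>> import torch
>>> import MinkowskiEngine as ME
>>> coordinates = torch.IntTensor([[0, 0, 0], [0, 1, 0]])
>>> sinput = ME.SparseTensor(torch.rand(2, 8), coordinates)
>>> sout = ME.MinkowskiReLU()(sinput)  # same coordinates, rectified features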
-
training
: bool¶
-
-
class
MinkowskiEngine.MinkowskiNonlinearity.
MinkowskiPReLU
(*args, **kwargs)¶ Bases:
MinkowskiEngine.MinkowskiNonlinearity.MinkowskiNonlinearityBase
-
MODULE
¶ alias of
torch.nn.modules.activation.PReLU
-
training
: bool¶
-
-
class
MinkowskiEngine.MinkowskiNonlinearity.
MinkowskiRReLU
(*args, **kwargs)¶ Bases:
MinkowskiEngine.MinkowskiNonlinearity.MinkowskiNonlinearityBase
-
MODULE
¶ alias of
torch.nn.modules.activation.RReLU
-
training
: bool¶
-
-
class
MinkowskiEngine.MinkowskiNonlinearity.
MinkowskiReLU
(*args, **kwargs)¶ Bases:
MinkowskiEngine.MinkowskiNonlinearity.MinkowskiNonlinearityBase
-
MODULE
¶ alias of
torch.nn.modules.activation.ReLU
-
training
: bool¶
-
-
class
MinkowskiEngine.MinkowskiNonlinearity.
MinkowskiReLU6
(*args, **kwargs)¶ Bases:
MinkowskiEngine.MinkowskiNonlinearity.MinkowskiNonlinearityBase
-
MODULE
¶ alias of
torch.nn.modules.activation.ReLU6
-
training
: bool¶
-
-
class
MinkowskiEngine.MinkowskiNonlinearity.
MinkowskiSELU
(*args, **kwargs)¶ Bases:
MinkowskiEngine.MinkowskiNonlinearity.MinkowskiNonlinearityBase
-
MODULE
¶ alias of
torch.nn.modules.activation.SELU
-
training
: bool¶
-
-
class
MinkowskiEngine.MinkowskiNonlinearity.
MinkowskiSiLU
(*args, **kwargs)¶ Bases:
MinkowskiEngine.MinkowskiNonlinearity.MinkowskiNonlinearityBase
-
MODULE
¶ alias of
torch.nn.modules.activation.SiLU
-
training
: bool¶
-
-
class
MinkowskiEngine.MinkowskiNonlinearity.
MinkowskiSigmoid
(*args, **kwargs)¶ Bases:
MinkowskiEngine.MinkowskiNonlinearity.MinkowskiNonlinearityBase
-
MODULE
¶ alias of
torch.nn.modules.activation.Sigmoid
-
training
: bool¶
-
-
class
MinkowskiEngine.MinkowskiNonlinearity.
MinkowskiSinusoidal
(in_channel, out_channel)¶ Bases:
MinkowskiCommon.MinkowskiModuleBase
-
forward
(input: Union[MinkowskiSparseTensor.SparseTensor, MinkowskiTensorField.TensorField])¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
-
training
: bool¶
-
-
class
MinkowskiEngine.MinkowskiNonlinearity.
MinkowskiSoftmax
(*args, **kwargs)¶ Bases:
MinkowskiEngine.MinkowskiNonlinearity.MinkowskiNonlinearityBase
-
MODULE
¶ alias of
torch.nn.modules.activation.Softmax
-
training
: bool¶
-
-
class
MinkowskiEngine.MinkowskiNonlinearity.
MinkowskiSoftmin
(*args, **kwargs)¶ Bases:
MinkowskiEngine.MinkowskiNonlinearity.MinkowskiNonlinearityBase
-
MODULE
¶ alias of
torch.nn.modules.activation.Softmin
-
training
: bool¶
-
-
class
MinkowskiEngine.MinkowskiNonlinearity.
MinkowskiSoftplus
(*args, **kwargs)¶ Bases:
MinkowskiEngine.MinkowskiNonlinearity.MinkowskiNonlinearityBase
-
MODULE
¶ alias of
torch.nn.modules.activation.Softplus
-
training
: bool¶
-
-
class
MinkowskiEngine.MinkowskiNonlinearity.
MinkowskiSoftshrink
(*args, **kwargs)¶ Bases:
MinkowskiEngine.MinkowskiNonlinearity.MinkowskiNonlinearityBase
-
MODULE
¶ alias of
torch.nn.modules.activation.Softshrink
-
training
: bool¶
-
-
class
MinkowskiEngine.MinkowskiNonlinearity.
MinkowskiSoftsign
(*args, **kwargs)¶ Bases:
MinkowskiEngine.MinkowskiNonlinearity.MinkowskiNonlinearityBase
-
MODULE
¶ alias of
torch.nn.modules.activation.Softsign
-
training
: bool¶
-
-
class
MinkowskiEngine.MinkowskiNonlinearity.
MinkowskiTanh
(*args, **kwargs)¶ Bases:
MinkowskiEngine.MinkowskiNonlinearity.MinkowskiNonlinearityBase
-
MODULE
¶ alias of
torch.nn.modules.activation.Tanh
-
training
: bool¶
-
-
class
MinkowskiEngine.MinkowskiNonlinearity.
MinkowskiTanhshrink
(*args, **kwargs)¶ Bases:
MinkowskiEngine.MinkowskiNonlinearity.MinkowskiNonlinearityBase
-
MODULE
¶ alias of
torch.nn.modules.activation.Tanhshrink
-
training
: bool¶
-
-
class
MinkowskiEngine.MinkowskiNonlinearity.
MinkowskiThreshold
(*args, **kwargs)¶ Bases:
MinkowskiEngine.MinkowskiNonlinearity.MinkowskiNonlinearityBase
-
MODULE
¶ alias of
torch.nn.modules.activation.Threshold
-
training
: bool¶
-
MinkowskiEngine.MinkowskiNormalization module¶
-
class
MinkowskiEngine.MinkowskiNormalization.
MinkowskiBatchNorm
(num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)¶ Bases:
torch.nn.modules.module.Module
A batch normalization layer for a sparse tensor.
See the pytorch torch.nn.BatchNorm1d for more details.
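A usage sketch (the channel count is an assumption for illustration): the layer normalizes the feature matrix of the sparse tensor.
>>> import torch
>>> import MinkowskiEngine as ME
>>> coordinates = torch.IntTensor([[0, 0, 0], [0, 1, 0], [0, 0, 1]])
>>> sinput = ME.SparseTensor(torch.rand(3, 16), coordinates)
>>> bn = ME.MinkowskiBatchNorm(16)
>>> sout = bn(sinput)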
forward
(input)¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
-
training
: bool¶
-
-
class
MinkowskiEngine.MinkowskiNormalization.
MinkowskiInstanceNorm
(num_features)¶ Bases:
MinkowskiCommon.MinkowskiModuleBase
An instance normalization layer for a sparse tensor.
-
forward
(input: MinkowskiSparseTensor.SparseTensor)¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
-
reset_parameters
()¶
-
training
: bool¶
-
-
class
MinkowskiEngine.MinkowskiNormalization.
MinkowskiInstanceNormFunction
¶ Bases:
torch.autograd.function.Function
-
static
backward
(ctx, out_grad)¶ Defines a formula for differentiating the operation.
This function is to be overridden by all subclasses.
It must accept a context ctx as the first argument, followed by as many outputs as forward() returned, and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input.
The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad as a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs gradient computed w.r.t. the output.
-
static
forward
(ctx, in_feat: torch.Tensor, in_coords_key: MinkowskiEngineBackend._C.CoordinateMapKey, glob_coords_key: Optional[MinkowskiEngineBackend._C.CoordinateMapKey] = None, coords_manager: Optional[MinkowskiCoordinateManager.CoordinateManager] = None, gpooling_mode=<PoolingMode.GLOBAL_AVG_POOLING_KERNEL: 7>)¶ Performs the operation.
This function is to be overridden by all subclasses.
It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types).
The context can be used to store tensors that can be then retrieved during the backward pass.
-
-
class
MinkowskiEngine.MinkowskiNormalization.
MinkowskiStableInstanceNorm
(num_features)¶ Bases:
MinkowskiCommon.MinkowskiModuleBase
-
forward
(x)¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
-
reset_parameters
()¶
-
training
: bool¶
-
-
class
MinkowskiEngine.MinkowskiNormalization.
MinkowskiSyncBatchNorm
(num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True, process_group=None)¶ Bases:
MinkowskiEngine.MinkowskiNormalization.MinkowskiBatchNorm
A batch normalization layer with multi-GPU synchronization.
-
classmethod
convert_sync_batchnorm
(module, process_group=None)¶ Helper function to convert MinkowskiEngine.MinkowskiBatchNorm layers in the model to MinkowskiEngine.MinkowskiSyncBatchNorm layers.
- Args:
module (nn.Module): containing module
process_group (optional): process group to scope synchronization, default is the whole world
- Returns:
The original module with the converted MinkowskiEngine.MinkowskiSyncBatchNorm layers
Example:
>>> # Network with MinkowskiBatchNorm layer
>>> module = torch.nn.Sequential(
>>>     MinkowskiLinear(20, 100),
>>>     MinkowskiBatchNorm(100)
>>> ).cuda()
>>> # creating process group (optional)
>>> # process_ids is a list of int identifying rank ids.
>>> process_group = torch.distributed.new_group(process_ids)
>>> sync_bn_module = convert_sync_batchnorm(module, process_group)
-
forward
(input)¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
-
training
: bool¶
-
MinkowskiEngine.MinkowskiOps module¶
-
class
MinkowskiEngine.MinkowskiOps.
MinkowskiLinear
(in_features, out_features, bias=True)¶ Bases:
torch.nn.modules.module.Module
-
forward
(input: Union[MinkowskiSparseTensor.SparseTensor, MinkowskiTensorField.TensorField])¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
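A usage sketch (assumed sizes): applies a linear layer to the features.
>>> import torch
>>> import MinkowskiEngine as ME
>>> coordinates = torch.IntTensor([[0, 0, 0], [0, 1, 0]])
>>> sinput = ME.SparseTensor(torch.rand(2, 8), coordinates)
>>> sout = ME.MinkowskiLinear(8, 16)(sinput)  # features become 2 x 16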
-
training
: bool¶
-
-
class
MinkowskiEngine.MinkowskiOps.
MinkowskiStackCat
(*args: torch.nn.modules.module.Module)¶ -
class
MinkowskiEngine.MinkowskiOps.
MinkowskiStackCat
(arg: OrderedDict[str, Module]) Bases:
torch.nn.modules.container.Sequential
-
forward
(x)¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
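A usage sketch (assuming, as the class names suggest, that the stack containers apply every submodule to the same input and merge the resulting features; sizes are illustrative):
>>> import torch
>>> import MinkowskiEngine as ME
>>> coordinates = torch.IntTensor([[0, 0, 0], [0, 1, 0]])
>>> sinput = ME.SparseTensor(torch.rand(2, 8), coordinates)
>>> stack = ME.MinkowskiStackCat(
>>>     ME.MinkowskiLinear(8, 4),
>>>     ME.MinkowskiLinear(8, 4),
>>> )
>>> sout = stack(sinput)  # concatenated features, 2 x 8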
-
training
: bool¶
-
-
class
MinkowskiEngine.MinkowskiOps.
MinkowskiStackMean
(*args: torch.nn.modules.module.Module)¶ -
class
MinkowskiEngine.MinkowskiOps.
MinkowskiStackMean
(arg: OrderedDict[str, Module]) Bases:
torch.nn.modules.container.Sequential
-
forward
(x)¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
-
training
: bool¶
-
-
class
MinkowskiEngine.MinkowskiOps.
MinkowskiStackSum
(*args: torch.nn.modules.module.Module)¶ -
class
MinkowskiEngine.MinkowskiOps.
MinkowskiStackSum
(arg: OrderedDict[str, Module]) Bases:
torch.nn.modules.container.Sequential
-
forward
(x)¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
-
training
: bool¶
-
-
class
MinkowskiEngine.MinkowskiOps.
MinkowskiStackVar
(*args: torch.nn.modules.module.Module)¶ -
class
MinkowskiEngine.MinkowskiOps.
MinkowskiStackVar
(arg: OrderedDict[str, Module]) Bases:
torch.nn.modules.container.Sequential
-
forward
(x)¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
-
training
: bool¶
-
-
class
MinkowskiEngine.MinkowskiOps.
MinkowskiToDenseTensor
(shape: Optional[torch.Size] = None)¶ Bases:
MinkowskiCommon.MinkowskiModuleBase
Converts a (differentiable) sparse tensor to a torch tensor.
The return type has the BxCxD1xD2x…xDN format.
Example:
>>> dense_tensor = torch.rand(3, 4, 11, 11, 11, 11)  # BxCxD1xD2x....xDN
>>> dense_tensor.requires_grad = True
>>> # Since the shape is fixed, cache the coordinates for faster inference
>>> coordinates = dense_coordinates(dense_tensor.shape)
>>> network = nn.Sequential(
>>>     # Add layers that can be applied on a regular pytorch tensor
>>>     nn.ReLU(),
>>>     MinkowskiToSparseTensor(coordinates=coordinates),
>>>     MinkowskiConvolution(4, 5, stride=2, kernel_size=3, dimension=4),
>>>     MinkowskiBatchNorm(5),
>>>     MinkowskiReLU(),
>>>     MinkowskiConvolutionTranspose(5, 6, stride=2, kernel_size=3, dimension=4),
>>>     MinkowskiToDenseTensor(dense_tensor.shape),  # must have the same tensor stride.
>>> )
>>> for i in range(5):
>>>     print(f"Iteration: {i}")
>>>     output = network(dense_tensor)  # returns a regular pytorch tensor
>>>     output.sum().backward()
-
forward
(input: MinkowskiSparseTensor.SparseTensor)¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
-
training
: bool¶
-
-
class
MinkowskiEngine.MinkowskiOps.
MinkowskiToFeature
¶ Bases:
MinkowskiCommon.MinkowskiModuleBase
Extract features from a sparse tensor and return a pytorch tensor.
Can be used to make network construction simpler.
Example:
>>> net = nn.Sequential(MinkowskiConvolution(...), MinkowskiGlobalMaxPooling(...), MinkowskiToFeature(), nn.Linear(...))
>>> torch_tensor = net(sparse_tensor)
-
forward
(x: MinkowskiSparseTensor.SparseTensor)¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
-
training
: bool¶
-
-
class
MinkowskiEngine.MinkowskiOps.
MinkowskiToSparseTensor
(remove_zeros=True, coordinates: Optional[torch.Tensor] = None)¶ Bases:
MinkowskiCommon.MinkowskiModuleBase
Converts a (differentiable) dense tensor or a MinkowskiEngine.TensorField to a MinkowskiEngine.SparseTensor.
For a dense tensor, the input must have the BxCxD1xD2x…xDN format.
remove_zeros (bool): if True, removes zero-valued coordinates. If False, use all coordinates to populate a sparse tensor. True by default.
If the shape of the tensor does not change, use dense_coordinates to cache the coordinates. Please refer to tests/python/dense.py for usage.
Example:
>>> # Differentiable dense torch.Tensor to sparse tensor.
>>> dense_tensor = torch.rand(3, 4, 11, 11, 11, 11)  # BxCxD1xD2x....xDN
>>> dense_tensor.requires_grad = True
>>> # Since the shape is fixed, cache the coordinates for faster inference
>>> coordinates = dense_coordinates(dense_tensor.shape)
>>> network = nn.Sequential(
>>>     # Add layers that can be applied on a regular pytorch tensor
>>>     nn.ReLU(),
>>>     MinkowskiToSparseTensor(remove_zeros=False, coordinates=coordinates),
>>>     MinkowskiConvolution(4, 5, kernel_size=3, dimension=4),
>>>     MinkowskiBatchNorm(5),
>>>     MinkowskiReLU(),
>>> )
>>> for i in range(5):
>>>     print(f"Iteration: {i}")
>>>     soutput = network(dense_tensor)
>>>     soutput.F.sum().backward()
>>>     soutput.dense(shape=dense_tensor.shape)
-
forward
(input: Union[MinkowskiTensorField.TensorField, torch.Tensor])¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
-
training
: bool¶
-
-
MinkowskiEngine.MinkowskiOps.
cat
(*sparse_tensors)¶ Concatenate sparse tensors
Concatenate sparse tensor features. All sparse tensors must have the same coordinate_map_key (the same coordinates). To concatenate sparse tensors with different sparsity patterns, use SparseTensor binary operations, or MinkowskiEngine.MinkowskiUnion.
Example:
>>> import MinkowskiEngine as ME
>>> sin = ME.SparseTensor(feats, coords)
>>> sin2 = ME.SparseTensor(feats2, coordinate_map_key=sin.coordinate_map_key, coordinate_manager=sin.coordinate_manager)
>>> sout = UNet(sin)  # Returns an output sparse tensor on the same coordinates
>>> sout2 = ME.cat(sin, sin2, sout)  # Can concatenate multiple sparse tensors
-
MinkowskiEngine.MinkowskiOps.
dense_coordinates
(shape: Union[list, torch.Size])¶ coordinates = dense_coordinates(tensor.shape)
-
MinkowskiEngine.MinkowskiOps.
mean
(*sparse_tensors)¶ Compute the average of sparse tensor features.
Average all sparse tensor features. All sparse tensors must have the same coordinate_map_key (the same coordinates). To average sparse tensors with different sparsity patterns, use SparseTensor binary operations, or MinkowskiEngine.MinkowskiUnion.
Example:
>>> import MinkowskiEngine as ME
>>> sin = ME.SparseTensor(feats, coords)
>>> sin2 = ME.SparseTensor(feats2, coordinate_map_key=sin.coordinate_map_key, coordinate_manager=sin.coordinate_manager)
>>> sout = UNet(sin)  # Returns an output sparse tensor on the same coordinates
>>> sout2 = ME.mean(sin, sin2, sout)  # Can average multiple sparse tensors
-
MinkowskiEngine.MinkowskiOps.
to_sparse
(x: torch.Tensor, format: Optional[str] = None, coordinates=None, device=None)¶ Convert a batched tensor (dimension 0 is the batch dimension) to a SparseTensor
x (torch.Tensor): a batched tensor. The first dimension is the batch dimension.
format (str): Format of the tensor. It must include 'B' and 'C' indicating the batch and channel dimension respectively. The rest of the dimensions must be 'X'. E.g., format="BCXX" if image data with BCHW format is used. If a 3D data has the channel at the last dimension, use format="BXXXC" indicating Batch X Height X Width X Depth X Channel. If not provided, the format will be "BCX...X".
device: Device the sparse tensor will be generated on. If not provided, the device of the input tensor will be used.
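A usage sketch (tensor sizes are assumptions for illustration):
>>> import torch
>>> import MinkowskiEngine as ME
>>> dense = torch.rand(2, 3, 16, 16)  # BCHW image batch
>>> stensor = ME.to_sparse(dense, format="BCXX")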
-
MinkowskiEngine.MinkowskiOps.
to_sparse_all
(dense_tensor: torch.Tensor, coordinates: Optional[torch.Tensor] = None)¶ Converts a (differentiable) dense tensor to a sparse tensor with all coordinates.
Assume the input to have the BxCxD1xD2x…xDN format.
If the shape of the tensor does not change, use dense_coordinates to cache the coordinates. Please refer to tests/python/dense.py for usage.
Example:
>>> dense_tensor = torch.rand(3, 4, 5, 6, 7, 8)  # BxCxD1xD2xD3xD4
>>> dense_tensor.requires_grad = True
>>> stensor = to_sparse_all(dense_tensor)
-
MinkowskiEngine.MinkowskiOps.
var
(*sparse_tensors)¶ Compute the variance of sparse tensor features.
Compute the variance over all sparse tensor features. All sparse tensors must have the same coordinate_map_key (the same coordinates). To operate on sparse tensors with different sparsity patterns, use SparseTensor binary operations, or MinkowskiEngine.MinkowskiUnion.
Example:
>>> import MinkowskiEngine as ME
>>> sin = ME.SparseTensor(feats, coords)
>>> sin2 = ME.SparseTensor(feats2, coordinate_map_key=sin.coordinate_map_key, coordinate_manager=sin.coordinate_manager)
>>> sout = UNet(sin)  # Returns an output sparse tensor on the same coordinates
>>> sout2 = ME.var(sin, sin2, sout)  # Computes the feature variance over multiple sparse tensors
MinkowskiEngine.MinkowskiPooling module¶
-
class
MinkowskiEngine.MinkowskiPooling.
MinkowskiAvgPooling
(kernel_size=-1, stride=1, dilation=1, kernel_generator=None, dimension=None)¶ Bases:
MinkowskiEngine.MinkowskiPooling.MinkowskiPoolingBase
Average input features within a kernel.
\[\mathbf{y}_\mathbf{u} = \frac{1}{|\mathcal{N}^D(\mathbf{u}, \mathcal{C}^\text{in})|} \sum_{\mathbf{i} \in \mathcal{N}^D(\mathbf{u}, \mathcal{C}^\text{in})} \mathbf{x}_{\mathbf{u} + \mathbf{i}} \; \text{for} \; \mathbf{u} \in \mathcal{C}^\text{out}\]
For each output \(\mathbf{u}\) in \(\mathcal{C}^\text{out}\), average input features.
Note
An average layer first computes the cardinality of the input features, the number of input features for each output, and divides the sum of the input features by the cardinality. For a dense tensor, the cardinality is a constant, the volume of a kernel. However, for a sparse tensor, the cardinality varies depending on the number of input features per output. Thus, average pooling for a sparse tensor is not equivalent to the conventional average pooling layer for a dense tensor. Please refer to MinkowskiSumPooling for the equivalent layer.
Note
The engine will generate the in-out mapping corresponding to a pooling function faster if the kernel size is equal to the stride size, e.g. kernel_size = [2, 1], stride = [2, 1].
If you use a U-network architecture, use the transposed version of the same function for up-sampling, e.g. pool = MinkowskiSumPooling(kernel_size=2, stride=2, D=D), then unpool = MinkowskiPoolingTranspose(kernel_size=2, stride=2, D=D).
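A usage sketch (assumed shapes):
>>> import torch
>>> import MinkowskiEngine as ME
>>> coordinates = torch.IntTensor([[0, 0, 0], [0, 1, 0], [0, 0, 3]])
>>> sinput = ME.SparseTensor(torch.rand(3, 8), coordinates)
>>> pool = ME.MinkowskiAvgPooling(kernel_size=2, stride=2, dimension=2)
>>> sout = pool(sinput)  # tensor_stride becomes [2, 2]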
-
dimension
¶
-
is_transpose
¶
-
kernel_generator
¶
-
pooling
¶
-
pooling_mode
¶
-
training
: bool¶
-
-
class
MinkowskiEngine.MinkowskiPooling.
MinkowskiDirectMaxPoolingFunction
¶ Bases:
torch.autograd.function.Function
-
static
backward
(ctx, grad_out_feat)¶ Defines a formula for differentiating the operation.
This function is to be overridden by all subclasses.
It must accept a context ctx as the first argument, followed by as many outputs as forward() returned, and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input.
The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad as a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs gradient computed w.r.t. the output.
-
static
forward
(ctx, in_map: torch.Tensor, out_map: torch.Tensor, in_feat: torch.Tensor, out_nrows: int, is_sorted: bool = False)¶ Performs the operation.
This function is to be overridden by all subclasses.
It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types).
The context can be used to store tensors that can be then retrieved during the backward pass.
-
-
class
MinkowskiEngine.MinkowskiPooling.
MinkowskiGlobalAvgPooling
(mode=<PoolingMode.GLOBAL_AVG_POOLING_PYTORCH_INDEX: 10>)¶ Bases:
MinkowskiEngine.MinkowskiPooling.MinkowskiGlobalPooling
-
training
: bool¶
-
-
class
MinkowskiEngine.MinkowskiPooling.
MinkowskiGlobalMaxPooling
(mode=<PoolingMode.GLOBAL_MAX_POOLING_PYTORCH_INDEX: 11>)¶ Bases:
MinkowskiEngine.MinkowskiPooling.MinkowskiGlobalPooling
Max pool all input features to one output feature at the origin.
\[\mathbf{y} = \max_{\mathbf{i} \in \mathcal{C}^\text{in}} \mathbf{x}_{\mathbf{i}}\]-
forward
(input, coordinates: Optional[Union[torch.IntTensor, MinkowskiEngineBackend._C.CoordinateMapKey, MinkowskiSparseTensor.SparseTensor]] = None)¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
-
training
: bool¶
-
-
class
MinkowskiEngine.MinkowskiPooling.
MinkowskiGlobalPooling
(mode: MinkowskiEngineBackend._C.PoolingMode = <PoolingMode.GLOBAL_AVG_POOLING_PYTORCH_INDEX: 10>)¶ Bases:
MinkowskiCommon.MinkowskiModuleBase
Pool all input features to one output.
-
forward
(input: MinkowskiSparseTensor.SparseTensor, coordinates: Optional[Union[torch.IntTensor, MinkowskiEngineBackend._C.CoordinateMapKey, MinkowskiSparseTensor.SparseTensor]] = None)¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
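Example (a minimal sketch; sinput is a sparse tensor as in the examples above):
>>> glob_pool = ME.MinkowskiGlobalPooling()
>>> sglobal = glob_pool(sinput)  # one feature vector per batch instance, at the origin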
-
training
: bool¶
-
-
class
MinkowskiEngine.MinkowskiPooling.
MinkowskiGlobalPoolingFunction
¶ Bases:
torch.autograd.function.Function
-
static
backward
(ctx, grad_out_feat)¶ Defines a formula for differentiating the operation.
This function is to be overridden by all subclasses.
It must accept a context ctx as the first argument, followed by as many outputs as forward() returned, and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input.
The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad as a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs gradient computed w.r.t. the output.
-
static
forward
(ctx, input_features: torch.Tensor, pooling_mode: MinkowskiEngineBackend._C.PoolingMode, in_coordinate_map_key: MinkowskiEngineBackend._C.CoordinateMapKey, out_coordinate_map_key: Optional[MinkowskiEngineBackend._C.CoordinateMapKey] = None, coordinate_manager: Optional[MinkowskiCoordinateManager.CoordinateManager] = None)¶ Performs the operation.
This function is to be overridden by all subclasses.
It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types).
The context can be used to store tensors that can be then retrieved during the backward pass.
-
-
class
MinkowskiEngine.MinkowskiPooling.
MinkowskiGlobalSumPooling
(mode=<PoolingMode.GLOBAL_SUM_POOLING_PYTORCH_INDEX: 9>)¶ Bases:
MinkowskiEngine.MinkowskiPooling.MinkowskiGlobalPooling
-
training
: bool¶
-
-
class
MinkowskiEngine.MinkowskiPooling.
MinkowskiLocalPoolingFunction
¶ Bases:
torch.autograd.function.Function
-
static
backward
(ctx, grad_out_feat)¶ Defines a formula for differentiating the operation.
This function is to be overridden by all subclasses.
It must accept a context ctx as the first argument, followed by as many outputs as forward() returned, and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input.
The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad as a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs gradient computed w.r.t. the output.
-
static
forward
(ctx, input_features: torch.Tensor, pooling_mode: MinkowskiEngineBackend._C.PoolingMode, kernel_generator: MinkowskiKernelGenerator.KernelGenerator, in_coordinate_map_key: MinkowskiEngineBackend._C.CoordinateMapKey, out_coordinate_map_key: Optional[MinkowskiEngineBackend._C.CoordinateMapKey] = None, coordinate_manager: Optional[MinkowskiCoordinateManager.CoordinateManager] = None)¶ Performs the operation.
This function is to be overridden by all subclasses.
It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types).
The context can be used to store tensors that can be then retrieved during the backward pass.
-
-
class
MinkowskiEngine.MinkowskiPooling.
MinkowskiLocalPoolingTransposeFunction
¶ Bases:
torch.autograd.function.Function
-
static
backward
(ctx, grad_out_feat)¶ Defines a formula for differentiating the operation.
This function is to be overridden by all subclasses.
It must accept a context ctx as the first argument, followed by as many outputs as forward() returned, and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input.
The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad as a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs gradient computed w.r.t. the output.
-
static
forward
(ctx, input_features: torch.Tensor, pooling_mode: MinkowskiEngineBackend._C.PoolingMode, kernel_generator: MinkowskiKernelGenerator.KernelGenerator, in_coordinate_map_key: MinkowskiEngineBackend._C.CoordinateMapKey, out_coordinate_map_key: Optional[MinkowskiEngineBackend._C.CoordinateMapKey] = None, coordinate_manager: Optional[MinkowskiCoordinateManager.CoordinateManager] = None)¶ Performs the operation.
This function is to be overridden by all subclasses.
It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types).
The context can be used to store tensors that can be then retrieved during the backward pass.
-
-
class
MinkowskiEngine.MinkowskiPooling.
MinkowskiMaxPooling
(kernel_size, stride=1, dilation=1, kernel_generator=None, dimension=None)¶ Bases:
MinkowskiEngine.MinkowskiPooling.MinkowskiPoolingBase
A max pooling layer for a sparse tensor.
\[y^c_\mathbf{u} = \max_{\mathbf{i} \in \mathcal{N}^D(\mathbf{u}, \mathcal{C}^\text{in})} x^c_{\mathbf{u} + \mathbf{i}} \; \text{for} \; \mathbf{u} \in \mathcal{C}^\text{out}\]where \(y^c_\mathbf{u}\) is a feature at channel \(c\) and a coordinate \(\mathbf{u}\).
Note
The engine will generate the in-out mapping corresponding to a pooling function faster if the kernel size is equal to the stride size, e.g., kernel_size = [2, 1], stride = [2, 1].
If you use a U-network architecture, use the transposed version of the same function for up-sampling, e.g., pool = MinkowskiSumPooling(kernel_size=2, stride=2, dimension=D), then unpool = MinkowskiPoolingTranspose(kernel_size=2, stride=2, dimension=D).
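Example (a sketch under the same assumptions as the average pooling example above):
>>> pool = ME.MinkowskiMaxPooling(kernel_size=3, stride=2, dimension=3)
>>> soutput = pool(sinput)  # channel-wise maximum within each kernel neighborhood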
-
dimension
¶
-
is_transpose
¶
-
kernel_generator
¶
-
pooling
¶
-
pooling_mode
¶
-
training
: bool¶
-
-
class
MinkowskiEngine.MinkowskiPooling.
MinkowskiPoolingBase
(kernel_size, stride=1, dilation=1, kernel_generator=None, is_transpose=False, pooling_mode=<PoolingMode.LOCAL_AVG_POOLING: 1>, dimension=-1)¶ Bases:
MinkowskiCommon.MinkowskiModuleBase
-
dimension
¶
-
forward
(input: MinkowskiSparseTensor.SparseTensor, coordinates: Optional[Union[torch.IntTensor, MinkowskiEngineBackend._C.CoordinateMapKey, MinkowskiSparseTensor.SparseTensor]] = None)¶ input
(MinkowskiEngine.SparseTensor): Input sparse tensor to apply pooling on.
coordinates ((torch.IntTensor, MinkowskiEngine.CoordinateMapKey, MinkowskiEngine.SparseTensor), optional): If provided, generate results on the provided coordinates. None by default.
-
is_transpose
¶
-
kernel_generator
¶
-
pooling
¶
-
pooling_mode
¶
-
training
: bool¶
-
-
class
MinkowskiEngine.MinkowskiPooling.
MinkowskiPoolingTranspose
(kernel_size, stride, dilation=1, kernel_generator=None, expand_coordinates=False, dimension=None)¶ Bases:
MinkowskiEngine.MinkowskiPooling.MinkowskiPoolingBase
A pooling transpose layer for a sparse tensor.
Unpools the features and divides them by the number of nonzero elements that contributed to each output.
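Example (a sketch of the pool/unpool pairing described in the pooling notes above; sinput as in the earlier examples):
>>> pool = ME.MinkowskiSumPooling(kernel_size=2, stride=2, dimension=3)
>>> unpool = ME.MinkowskiPoolingTranspose(kernel_size=2, stride=2, dimension=3)
>>> sdown = pool(sinput)
>>> sup = unpool(sdown)  # tensor_stride goes back from 2 to 1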
-
dimension
¶
-
is_transpose
¶
-
kernel_generator
¶
-
pooling
¶
-
pooling_mode
¶
-
training
: bool¶
-
-
class
MinkowskiEngine.MinkowskiPooling.
MinkowskiSumPooling
(kernel_size, stride=1, dilation=1, kernel_generator=None, dimension=None)¶ Bases:
MinkowskiEngine.MinkowskiPooling.MinkowskiPoolingBase
Sum all input features within a kernel.
\[\mathbf{y}_\mathbf{u} = \sum_{\mathbf{i} \in \mathcal{N}^D(\mathbf{u}, \mathcal{C}^\text{in})} \mathbf{x}_{\mathbf{u} + \mathbf{i}} \; \text{for} \; \mathbf{u} \in \mathcal{C}^\text{out}\]For each output \(\mathbf{u}\) in \(\mathcal{C}^\text{out}\), sum the input features.
Note
An average pooling layer first computes the cardinality of the input features (the number of input features for each output) and divides the sum of the input features by the cardinality. For a dense tensor, the cardinality is a constant: the volume of the kernel. For a sparse tensor, however, the cardinality varies with the number of input features per output. Thus, averaging the input features with the cardinality may not be equivalent to the conventional average pooling for a dense tensor. This layer provides an alternative that does not divide the sum by the cardinality.
Note
The engine will generate the in-out mapping corresponding to a pooling function faster if the kernel size is equal to the stride size, e.g., kernel_size = [2, 1], stride = [2, 1].
If you use a U-network architecture, use the transposed version of the same function for up-sampling, e.g., pool = MinkowskiSumPooling(kernel_size=2, stride=2, dimension=D), then unpool = MinkowskiPoolingTranspose(kernel_size=2, stride=2, dimension=D).
-
dimension
¶
-
is_transpose
¶
-
kernel_generator
¶
-
pooling
¶
-
pooling_mode
¶
-
training
: bool¶
-
MinkowskiEngine.MinkowskiPruning module¶
-
class
MinkowskiEngine.MinkowskiPruning.
MinkowskiPruning
¶ Bases:
MinkowskiCommon.MinkowskiModuleBase
Remove specified coordinates from a
MinkowskiEngine.SparseTensor
.-
forward
(input: MinkowskiSparseTensor.SparseTensor, mask: torch.Tensor)¶ - Args:
input
(MinkowskiEngine.SparseTensor): a sparse tensor to remove coordinates from.
mask (torch.BoolTensor): mask vector that specifies which coordinates to keep. Coordinates with False will be removed.
- Returns:
A
MinkowskiEngine.SparseTensor
with C = coordinates corresponding to mask == True and F = a copy of the features where mask == True.
Example:
>>> # Define inputs
>>> input = SparseTensor(feats, coordinates=coords)
>>> # Any boolean tensor can be used as the filter
>>> mask = torch.rand(feats.size(0)) < 0.5
>>> pruning = MinkowskiPruning()
>>> output = pruning(input, mask)
-
training
: bool¶
-
-
class
MinkowskiEngine.MinkowskiPruning.
MinkowskiPruningFunction
¶ Bases:
torch.autograd.function.Function
-
static
backward
(ctx, grad_out_feat: torch.Tensor)¶ Defines a formula for differentiating the operation.
This function is to be overridden by all subclasses.
It must accept a context ctx as the first argument, followed by as many outputs as forward() returned, and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input.
The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad as a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs gradient computed w.r.t. the output.
-
static
forward
(ctx, in_feat: torch.Tensor, mask: torch.Tensor, in_coords_key: MinkowskiEngineBackend._C.CoordinateMapKey, out_coords_key: Optional[MinkowskiEngineBackend._C.CoordinateMapKey] = None, coords_manager: Optional[MinkowskiCoordinateManager.CoordinateManager] = None)¶ Performs the operation.
This function is to be overridden by all subclasses.
It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types).
The context can be used to store tensors that can be then retrieved during the backward pass.
-
MinkowskiEngine.MinkowskiSparseTensor module¶
-
class
MinkowskiEngine.MinkowskiSparseTensor.
SparseTensor
(features: torch.Tensor, coordinates: Optional[torch.Tensor] = None, tensor_stride: Union[int, collections.abc.Sequence, numpy.ndarray, torch.IntTensor] = 1, coordinate_map_key: Optional[MinkowskiEngineBackend._C.CoordinateMapKey] = None, coordinate_manager: Optional[MinkowskiCoordinateManager.CoordinateManager] = None, quantization_mode: MinkowskiTensor.SparseTensorQuantizationMode = <SparseTensorQuantizationMode.RANDOM_SUBSAMPLE: 0>, allocator_type: Optional[MinkowskiEngineBackend._C.GPUMemoryAllocatorType] = None, minkowski_algorithm: Optional[MinkowskiEngineBackend._C.MinkowskiAlgorithm] = None, requires_grad=None, device=None)¶ Bases:
MinkowskiTensor.Tensor
A sparse tensor class. Can be accessed via
MinkowskiEngine.SparseTensor
.The
SparseTensor
class is the basic tensor in MinkowskiEngine. For the definition of a sparse tensor, please visit the terminology page. We use the COOrdinate (COO) format to save a sparse tensor [1]. This representation is simply a concatenation of coordinates in a matrix \(C\) and associated features \(F\).\[\begin{split}\mathbf{C} = \begin{bmatrix} b_1 & x_1^1 & x_1^2 & \cdots & x_1^D \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ b_N & x_N^1 & x_N^2 & \cdots & x_N^D \end{bmatrix}, \; \mathbf{F} = \begin{bmatrix} \mathbf{f}_1^T\\ \vdots\\ \mathbf{f}_N^T \end{bmatrix}\end{split}\]where \(\mathbf{x}_i \in \mathcal{Z}^D\) is a \(D\)-dimensional coordinate and \(b_i \in \mathcal{Z}_+\) denotes the corresponding batch index. \(N\) is the number of non-zero elements in the sparse tensor, each with the coordinate \((b_i, x_i^1, x_i^2, \cdots, x_i^D)\), and the associated feature \(\mathbf{f}_i\). Internally, we handle the batch index as an additional spatial dimension.
Example:
>>> coords, feats = ME.utils.sparse_collate([coords_batch0, coords_batch1], [feats_batch0, feats_batch1])
>>> A = ME.SparseTensor(features=feats, coordinates=coords)
>>> B = ME.SparseTensor(features=feats, coordinate_map_key=A.coordinate_map_key, coordinate_manager=A.coordinate_manager)
>>> C = ME.SparseTensor(features=feats, coordinates=coords, quantization_mode=ME.SparseTensorQuantizationMode.UNWEIGHTED_AVERAGE)
>>> D = ME.SparseTensor(features=feats, coordinates=coords, quantization_mode=ME.SparseTensorQuantizationMode.RANDOM_SUBSAMPLE)
>>> E = ME.SparseTensor(features=feats, coordinates=coords, tensor_stride=2)
Warning
To use the GPU-backend for coordinate management, the
coordinates
must be a torch tensor on GPU. Applying to(device) after MinkowskiEngine.SparseTensor initialization with CPU coordinates will waste time and computation on creating an unnecessary CPU CoordinateMap, since the GPU CoordinateMap will be created from scratch anyway.
Warning
Before MinkowskiEngine version 0.4, we put the batch indices on the last column. Thus, direct manipulation of coordinates will be incompatible with the latest versions. Instead, please use
MinkowskiEngine.utils.batched_coordinates
orMinkowskiEngine.utils.sparse_collate
to create batched coordinates.Also, to access coordinates or features batch-wise, use the functions
coordinates_at(batch_index : int)
,features_at(batch_index : int)
of a sparse tensor. Or, to access all batch-wise coordinates and features, use the decomposed_coordinates, decomposed_features, and decomposed_coordinates_and_features properties of a sparse tensor.
Example:
>>> coords, feats = ME.utils.sparse_collate([coords_batch0, coords_batch1], [feats_batch0, feats_batch1])
>>> A = ME.SparseTensor(features=feats, coordinates=coords)
>>> coords_batch0 = A.coordinates_at(batch_index=0)
>>> feats_batch1 = A.features_at(batch_index=1)
>>> list_of_coords, list_of_features = A.decomposed_coordinates_and_features
-
cat_slice
(X)¶ - Args:
X
(MinkowskiEngine.SparseTensor
): a sparse tensor that discretized the original input.- Returns:
tensor_field
(MinkowskiEngine.TensorField
): the resulting tensor field contains the features of self concatenated with the features of the input X, on the original continuous coordinates that generated X.
Example:
>>> # coords, feats from a data loader
>>> print(len(coords))  # 227742
>>> sinput = ME.SparseTensor(coordinates=coords, features=feats, quantization_mode=SparseTensorQuantizationMode.UNWEIGHTED_AVERAGE)
>>> print(len(sinput))  # 161890 quantization results in fewer voxels
>>> soutput = network(sinput)
>>> print(len(soutput))  # 161890 Output with the same resolution
>>> ofield = soutput.cat_slice(sinput)
>>> assert soutput.F.size(1) + sinput.F.size(1) == ofield.F.size(1)  # concatenation of features
-
coordinate_map_key
¶
-
dense
(shape=None, min_coordinate=None, contract_stride=True)¶ Convert the
MinkowskiEngine.SparseTensor
to a torch dense tensor.- Args:
shape
(torch.Size, optional): The size of the output tensor.
min_coordinate (torch.IntTensor, optional): The min coordinates of the output sparse tensor. Must be divisible by the current tensor_stride. If 0 is given, it will use the origin for the min coordinate.
contract_stride (bool, optional): The output coordinates will be divided by the tensor stride to make features spatially contiguous. True by default.
- Returns:
tensor
(torch.Tensor): the torch tensor with size [Batch Dim, Feature Dim, Spatial Dim…, Spatial Dim]. The coordinate of each feature can be accessed via min_coordinate + tensor_stride * [the coordinate of the dense tensor].
min_coordinate (torch.IntTensor): the D-dimensional vector defining the minimum coordinate of the output tensor.
tensor_stride (torch.IntTensor): the D-dimensional vector defining the stride between tensor elements.
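Example (a minimal sketch; stensor is assumed to be a SparseTensor as constructed in the examples above):
>>> dtensor, min_coordinate, tensor_stride = stensor.dense()
>>> dtensor.shape  # torch.Size([batch, channels, *spatial_dims])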
-
features_at_coordinates
(query_coordinates: torch.Tensor)¶ Extract features at the specified continuous coordinate matrix.
- Args:
query_coordinates
(torch.FloatTensor
): a coordinate matrix of size \(N \times (D + 1)\) where \(D\) is the size of the spatial dimension.- Returns:
queried_features
(torch.Tensor
): a feature matrix of size \(N \times D_F\) where \(D_F\) is the number of channels in the feature. For coordinates not present in the current sparse tensor, corresponding feature rows will be zeros.
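Example (a minimal sketch; the query points are hypothetical continuous 3D coordinates with a leading batch index):
>>> queries = torch.FloatTensor([[0, 0.5, 0.5, 0.5], [0, 2.1, 0.3, 0.1]])
>>> qfeats = stensor.features_at_coordinates(queries)  # rows of zeros for coordinates not in stensor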
-
initialize_coordinates
(coordinates, features, coordinate_map_key)¶
-
inverse_mapping
¶
-
quantization_mode
¶
-
slice
(X)¶ - Args:
X
(MinkowskiEngine.SparseTensor
): a sparse tensor that discretized the original input.- Returns:
tensor_field
(MinkowskiEngine.TensorField
): the resulting tensor field contains features on the continuous coordinates that generated the input X.
Example:
>>> # coords, feats from a data loader
>>> print(len(coords))  # 227742
>>> tfield = ME.TensorField(coordinates=coords, features=feats, quantization_mode=SparseTensorQuantizationMode.UNWEIGHTED_AVERAGE)
>>> print(len(tfield))  # 227742
>>> sinput = tfield.sparse()  # 161890 quantization results in fewer voxels
>>> soutput = MinkUNet(sinput)
>>> print(len(soutput))  # 161890 Output with the same resolution
>>> ofield = soutput.slice(tfield)
>>> assert isinstance(ofield, ME.TensorField)
>>> len(ofield) == len(coords)  # recovers the original ordering and length
>>> assert isinstance(ofield.F, torch.Tensor)  # .F returns the features
-
sparse
(min_coords=None, max_coords=None, contract_coords=True)¶ Convert the
MinkowskiEngine.SparseTensor
to a torch sparse tensor.- Args:
min_coords
(torch.IntTensor, optional): The min coordinates of the output sparse tensor. Must be divisible by the current tensor_stride.
max_coords (torch.IntTensor, optional): The max coordinates of the output sparse tensor (inclusive). Must be divisible by the current tensor_stride.
contract_coords (bool, optional): If True, the output coordinates will be divided by the tensor stride to make features contiguous.
- Returns:
sparse_tensor
(torch.sparse.Tensor): the torch sparse tensor representation of the self in [Batch Dim, Spatial Dims…, Feature Dim]. The coordinate of each feature can be accessed via min_coord + tensor_stride * [the coordinate of the dense tensor].
min_coords (torch.IntTensor): the D-dimensional vector defining the minimum coordinate of the output sparse tensor. If contract_coords is True, the min_coords will also be contracted.
tensor_stride (torch.IntTensor): the D-dimensional vector defining the stride between tensor elements.
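Example (a sketch; stensor as above; to_dense is the standard torch.sparse API):
>>> torch_sparse, min_coords, tensor_stride = stensor.sparse()
>>> torch_dense = torch_sparse.to_dense()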
-
unique_index
¶
-
MinkowskiEngine.MinkowskiTensor module¶
-
class
MinkowskiEngine.MinkowskiTensor.
SparseTensorOperationMode
(value)¶ Bases:
enum.Enum
Enum class for SparseTensor internal instantiation modes.
SEPARATE_COORDINATE_MANAGER
: always create a new coordinate manager.
SHARE_COORDINATE_MANAGER: always use the globally defined coordinate manager. The coordinate manager must be cleared manually with MinkowskiEngine.SparseTensor.clear_global_coordinate_manager
.-
SEPARATE_COORDINATE_MANAGER
= 0¶
-
SHARE_COORDINATE_MANAGER
= 1¶
-
-
class
MinkowskiEngine.MinkowskiTensor.
SparseTensorQuantizationMode
(value)¶ Bases:
enum.Enum
RANDOM_SUBSAMPLE: randomly subsample one coordinate per quantization block.
UNWEIGHTED_AVERAGE: average all features within a quantization block equally.
UNWEIGHTED_SUM: sum all features within a quantization block equally.
NO_QUANTIZATION: no quantization is applied. Should not be used for normal operation.
-
MAX_POOL
= 4¶
-
NO_QUANTIZATION
= 3¶
-
RANDOM_SUBSAMPLE
= 0¶
-
UNWEIGHTED_AVERAGE
= 1¶
-
UNWEIGHTED_SUM
= 2¶
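Example (a minimal sketch of how UNWEIGHTED_AVERAGE resolves duplicate coordinates; the values are hypothetical):
>>> coords = torch.IntTensor([[0, 0, 0, 0], [0, 0, 0, 0]])  # two points in the same voxel
>>> feats = torch.FloatTensor([[1.0], [3.0]])
>>> s = ME.SparseTensor(features=feats, coordinates=coords, quantization_mode=ME.SparseTensorQuantizationMode.UNWEIGHTED_AVERAGE)
>>> s.F  # a single row, [2.0], the average of the duplicate features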
-
-
class
MinkowskiEngine.MinkowskiTensor.
Tensor
¶ Bases:
object
A sparse tensor class. Can be accessed via
MinkowskiEngine.SparseTensor
.The
SparseTensor
class is the basic tensor in MinkowskiEngine. For the definition of a sparse tensor, please visit the terminology page. We use the COOrdinate (COO) format to save a sparse tensor [1]. This representation is simply a concatenation of coordinates in a matrix \(C\) and associated features \(F\).\[\begin{split}\mathbf{C} = \begin{bmatrix} b_1 & x_1^1 & x_1^2 & \cdots & x_1^D \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ b_N & x_N^1 & x_N^2 & \cdots & x_N^D \end{bmatrix}, \; \mathbf{F} = \begin{bmatrix} \mathbf{f}_1^T\\ \vdots\\ \mathbf{f}_N^T \end{bmatrix}\end{split}\]where \(\mathbf{x}_i \in \mathcal{Z}^D\) is a \(D\)-dimensional coordinate and \(b_i \in \mathcal{Z}_+\) denotes the corresponding batch index. \(N\) is the number of non-zero elements in the sparse tensor, each with the coordinate \((b_i, x_i^1, x_i^2, \cdots, x_i^D)\), and the associated feature \(\mathbf{f}_i\). Internally, we handle the batch index as an additional spatial dimension.
Example:
>>> coords, feats = ME.utils.sparse_collate([coords_batch0, coords_batch1], [feats_batch0, feats_batch1])
>>> A = ME.SparseTensor(features=feats, coordinates=coords)
>>> B = ME.SparseTensor(features=feats, coordinate_map_key=A.coordinate_map_key, coordinate_manager=A.coordinate_manager)
>>> C = ME.SparseTensor(features=feats, coordinates=coords, quantization_mode=ME.SparseTensorQuantizationMode.UNWEIGHTED_AVERAGE)
>>> D = ME.SparseTensor(features=feats, coordinates=coords, tensor_stride=2)
Warning
To use the GPU-backend for coordinate management, the
coordinates
must be a torch tensor on GPU. Applying to(device) after MinkowskiEngine.SparseTensor initialization with CPU coordinates will waste time and computation on creating an unnecessary CPU CoordinateMap, since the GPU CoordinateMap will be created from scratch anyway.
Warning
Before MinkowskiEngine version 0.4, we put the batch indices on the last column. Thus, direct manipulation of coordinates will be incompatible with the latest versions. Instead, please use
MinkowskiEngine.utils.batched_coordinates
orMinkowskiEngine.utils.sparse_collate
to create batched coordinates.Also, to access coordinates or features batch-wise, use the functions
coordinates_at(batch_index : int)
,features_at(batch_index : int)
of a sparse tensor. Or, to access all batch-wise coordinates and features, use the decomposed_coordinates, decomposed_features, and decomposed_coordinates_and_features properties of a sparse tensor.
Example:
>>> coords, feats = ME.utils.sparse_collate([coords_batch0, coords_batch1], [feats_batch0, feats_batch1])
>>> A = ME.SparseTensor(features=feats, coordinates=coords)
>>> coords_batch0 = A.coordinates_at(batch_index=0)
>>> feats_batch1 = A.features_at(batch_index=1)
>>> list_of_coords, list_of_features = A.decomposed_coordinates_and_features
-
property
C
¶ The alias of
coordinates
.
-
property
D
¶ The spatial dimension. Alias of dimension.
-
property
F
¶ The alias of
features
.
-
property
coordinate_manager
¶
-
coordinate_map_key
¶
-
property
coordinates
¶ The coordinates of the current sparse tensor. The coordinates are represented as a \(N \times (D + 1)\) dimensional matrix where \(N\) is the number of points in the space and \(D\) is the dimension of the space (e.g. 3 for 3D, 4 for 3D + Time). The additional column of the matrix C holds the batch index, which is internally treated as an additional spatial dimension to disassociate different instances in a batch.
-
coordinates_and_features_at
(batch_index)¶ Returns a coordinate and feature matrix at the specified batch index.
Returns a coordinate and feature matrix at the specified batch_index. The coordinate matrix is a torch.IntTensor \(C \in \mathcal{R}^{N \times D}\), where \(N\) is the number of non-zero elements at the specified batch index in \(D\)-dimensional space. The feature matrix is a torch.Tensor \(F \in \mathcal{R}^{N \times N_F}\), where \(N\) is the number of non-zero elements at the specified batch index and \(N_F\) is the number of channels.
Note
The order of features is non-deterministic within each batch. To retrieve the order in which the decomposed features are generated, use
decomposition_permutations
.
-
coordinates_at
(batch_index)¶ Return coordinates at the specified batch index.
Returns a torch.IntTensor \(C \in \mathcal{R}^{N_i \times D}\) of coordinates at the specified batch index, where \(N_i\) is the number of non-zero elements at the \(i\)-th batch index in \(D\)-dimensional space.
Note
The order of coordinates is non-deterministic within each batch. Use
decomposed_coordinates_and_features
to retrieve both coordinates and features in the same order. To retrieve the order in which the decomposed coordinates are generated, use decomposition_permutations
.
-
property
decomposed_coordinates
¶ Returns a list of coordinates per batch.
Returns a list of torch.IntTensor \(C \in \mathcal{R}^{N_i \times D}\) coordinates per batch, where \(N_i\) is the number of non-zero elements at the \(i\)-th batch index in \(D\)-dimensional space.
Note
The order of coordinates is non-deterministic within each batch. Use
decomposed_coordinates_and_features
to retrieve both coordinates and features in the same order. To retrieve the order in which the decomposed coordinates are generated, use decomposition_permutations
.
-
property
decomposed_coordinates_and_features
¶ Returns a list of coordinates and a list of features per batch.
Note
The order of decomposed coordinates and features is non-deterministic within each batch. To retrieve the order in which the decomposed features are generated, use
decomposition_permutations
.
-
property
decomposed_features
¶ Returns a list of features per batch.
Returns a list of torch.Tensor \(F \in \mathcal{R}^{N_i \times N_F}\) features per batch, where \(N_i\) is the number of non-zero elements at the \(i\)-th batch index and \(N_F\) is the number of channels.
Note
The order of features is non-deterministic within each batch. Use
decomposed_coordinates_and_features
to retrieve both coordinates and features in the same order. To retrieve the order in which the decomposed features are generated, use decomposition_permutations
.
-
property
decomposition_permutations
¶ Returns a list of indices per batch, where the indices define the permutation of the batch-wise decomposition.
Example:
>>> # coords, feats, labels are given. All follow the same order
>>> stensor = ME.SparseTensor(feats, coords)
>>> conv = ME.MinkowskiConvolution(in_channels=3, out_channels=3, kernel_size=3, dimension=3)
>>> list_of_features = stensor.decomposed_features
>>> list_of_permutations = stensor.decomposition_permutations
>>> # list_of_features == [feats[inds] for inds in list_of_permutations]
>>> list_of_decomposed_labels = [labels[inds] for inds in list_of_permutations]
>>> for curr_feats, curr_labels in zip(list_of_features, list_of_decomposed_labels):
>>>     loss += torch.nn.functional.mse_loss(curr_feats, curr_labels)
-
detach
()¶
-
property
device
¶
-
property
dimension
¶ Alias of D.
-
double
()¶
-
property
dtype
¶
-
property
features
¶ The features of the current sparse tensor. The features are \(N \times D_F\) where \(N\) is the number of points in the space and \(D_F\) is the dimension of each feature vector. Please refer to
coordinates
to access the associated coordinates.
-
features_at
(batch_index)¶ Returns a feature matrix at the specified batch index.
Returns a torch.Tensor \(F \in \mathcal{R}^{N \times N_F}\) feature matrix, where \(N\) is the number of non-zero elements at the specified batch index and \(N_F\) is the number of channels.
Note
The order of features is non-deterministic within each batch. Use
decomposed_coordinates_and_features
to retrieve both coordinates and features in the same order. To retrieve the order in which the decomposed features are generated, use decomposition_permutations
.
-
float
()¶
-
get_device
()¶
-
inverse_mapping
¶
-
quantization_mode
¶
-
property
requires_grad
¶
-
requires_grad_
(requires_grad: bool = True)¶
-
property
shape
¶
-
size
()¶
-
property
tensor_stride
¶
-
unique_index
¶
-
-
MinkowskiEngine.MinkowskiTensor.
clear_global_coordinate_manager
()¶ Clear the global coordinate manager cache.
When you use the operation mode:
MinkowskiEngine.SparseTensor.SparseTensorOperationMode.SHARE_COORDINATE_MANAGER
, you must explicitly clear the coordinate manager after each forward/backward pass.
-
MinkowskiEngine.MinkowskiTensor.
global_coordinate_manager
()¶ Return the current global coordinate manager.
-
MinkowskiEngine.MinkowskiTensor.
set_global_coordinate_manager
(coordinate_manager)¶ Set the global coordinate manager.
- Args:
coordinate_manager (MinkowskiEngine.CoordinateManager): the coordinate manager to set as the global coordinate manager.
-
MinkowskiEngine.MinkowskiTensor.
set_sparse_tensor_operation_mode
(operation_mode: MinkowskiEngine.MinkowskiTensor.SparseTensorOperationMode)¶ Define the sparse tensor coordinate manager operation mode.
By default, a
MinkowskiEngine.SparseTensor.SparseTensor
instantiation creates a new coordinate manager that is not shared with other sparse tensors. By setting this function withMinkowskiEngine.SparseTensorOperationMode.SHARE_COORDINATE_MANAGER
, you can share the coordinate manager globally with other sparse tensors. However, you must explicitly clear the coordinate manager after use. Please refer to MinkowskiEngine.clear_global_coordinate_manager
.- Args:
operation_mode
(MinkowskiEngine.SparseTensorOperationMode
): The operation mode for the sparse tensor coordinate manager. By defaultMinkowskiEngine.SparseTensorOperationMode.SEPARATE_COORDINATE_MANAGER
.
Example:
>>> import MinkowskiEngine as ME
>>> ME.set_sparse_tensor_operation_mode(ME.SparseTensorOperationMode.SHARE_COORDINATE_MANAGER)
>>> ...
>>> a = ME.SparseTensor(...)
>>> b = ME.SparseTensor(...)  # coords_man shared
>>> ...  # one feed forward and backward
>>> ME.clear_global_coordinate_manager()  # Must use to clear the coordinates after one forward/backward
-
MinkowskiEngine.MinkowskiTensor.
sparse_tensor_operation_mode
() → MinkowskiEngine.MinkowskiTensor.SparseTensorOperationMode¶ Return the current sparse tensor operation mode.
MinkowskiEngine.MinkowskiTensorField module¶
-
class
MinkowskiEngine.MinkowskiTensorField.
TensorField
(features: torch.Tensor, coordinates: Optional[torch.Tensor] = None, tensor_stride: Union[int, collections.abc.Sequence, numpy.ndarray, torch.IntTensor] = 1, coordinate_field_map_key: Optional[MinkowskiEngineBackend._C.CoordinateMapKey] = None, coordinate_manager: Optional[MinkowskiCoordinateManager.CoordinateManager] = None, quantization_mode: MinkowskiTensor.SparseTensorQuantizationMode = <SparseTensorQuantizationMode.UNWEIGHTED_AVERAGE: 1>, allocator_type: Optional[MinkowskiEngineBackend._C.GPUMemoryAllocatorType] = None, minkowski_algorithm: Optional[MinkowskiEngineBackend._C.MinkowskiAlgorithm] = None, requires_grad=None, device=None)¶ Bases:
MinkowskiTensor.Tensor
-
property
C
¶ The alias of
coordinates
.
-
coordinate_field_map_key
¶
-
property
coordinates
¶ The coordinates of the current sparse tensor. The coordinates are represented as a \(N \times (D + 1)\) dimensional matrix where \(N\) is the number of points in the space and \(D\) is the dimension of the space (e.g. 3 for 3D, 4 for 3D + Time). The additional column of the matrix C holds the batch index, which is internally treated as an additional spatial dimension to disassociate different instances in a batch.
-
inverse_mapping
(sparse_tensor_map_key: MinkowskiEngineBackend._C.CoordinateMapKey)¶
-
quantization_mode
¶
-
sparse
(tensor_stride: Union[int, collections.abc.Sequence, numpy.array] = 1, coordinate_map_key: Optional[MinkowskiEngineBackend._C.CoordinateMapKey] = None, quantization_mode=None)¶ Converts the current sparse tensor field to a sparse tensor.
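Example (a minimal sketch of the TensorField workflow, mirroring the slice example above; network is assumed to be a model that preserves the input resolution):
>>> tfield = ME.TensorField(features=feats, coordinates=coords)
>>> sinput = tfield.sparse()  # quantize to a SparseTensor
>>> soutput = network(sinput)
>>> ofield = soutput.slice(tfield)  # project back to the original continuous coordinates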
-
MinkowskiEngine.MinkowskiUnion module¶
-
class
MinkowskiEngine.MinkowskiUnion.
MinkowskiUnion
¶ Bases:
torch.nn.modules.module.Module
Create a union of all sparse tensors and add overlapping features.
- Args:
None
Warning
This function is experimental and its usage may change in future updates.
-
forward
(*inputs)¶ - Args:
A variable number of
MinkowskiEngine.SparseTensor
’s.- Returns:
A
MinkowskiEngine.SparseTensor
with coordinates = union of all input coordinates, and features = sum of all features corresponding to the coordinate.
Example:
>>> # Define inputs
>>> input1 = SparseTensor(
>>>     torch.rand(N, in_channels, dtype=torch.double), coordinates=coords)
>>> # All inputs must share the same coordinate manager
>>> input2 = SparseTensor(
>>>     torch.rand(N, in_channels, dtype=torch.double),
>>>     coordinates=coords + 1,
>>>     coordinate_manager=input1.coordinate_manager,  # Must use the same coordinate manager
>>> )
>>> union = MinkowskiUnion()
>>> output = union(input1, input2)
-
training
: bool¶
-
class
MinkowskiEngine.MinkowskiUnion.
MinkowskiUnionFunction
¶ Bases:
torch.autograd.function.Function
-
static
backward
(ctx, grad_out_feat)¶ Defines a formula for differentiating the operation.
This function is to be overridden by all subclasses.
It must accept a context ctx as the first argument, followed by as many outputs as forward() returned, and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input.
The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad as a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs gradient computed w.r.t. the output.
-
static
forward
(ctx, in_coords_keys: list, out_coords_key: MinkowskiEngineBackend._C.CoordinateMapKey, coordinate_manager: MinkowskiCoordinateManager.CoordinateManager, *in_feats)¶ Performs the operation.
This function is to be overridden by all subclasses.
It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types).
The context can be used to store tensors that can be then retrieved during the backward pass.
-
MinkowskiEngine.diagnostics module¶
-
MinkowskiEngine.diagnostics.
parse_nvidia_smi
()¶
-
MinkowskiEngine.diagnostics.
print_diagnostics
()¶
MinkowskiEngine.sparse_matrix_functions module¶
-
class
MinkowskiEngine.sparse_matrix_functions.
MinkowskiSPMMAverageFunction
¶ Bases:
torch.autograd.function.Function
-
static
backward
(ctx, grad: torch.Tensor)¶ Defines a formula for differentiating the operation.
This function is to be overridden by all subclasses.
It must accept a context ctx as the first argument, followed by as many outputs as forward() returned, and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input.
The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad as a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs gradient computed w.r.t. the output.
-
static
forward
(ctx, rows: torch.Tensor, cols: torch.Tensor, size: torch.Size, mat: torch.Tensor, cuda_spmm_alg: int = 1)¶ Performs the operation.
This function is to be overridden by all subclasses.
It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types).
The context can be used to store tensors that can be then retrieved during the backward pass.
-
-
class
MinkowskiEngine.sparse_matrix_functions.
MinkowskiSPMMFunction
¶ Bases:
torch.autograd.function.Function
-
static
backward
(ctx, grad: torch.Tensor)¶ Defines a formula for differentiating the operation.
This function is to be overridden by all subclasses.
It must accept a context ctx as the first argument, followed by as many outputs as forward() returned, and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input.
The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad as a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs gradient computed w.r.t. the output.
-
static
forward
(ctx, rows: torch.Tensor, cols: torch.Tensor, vals: torch.Tensor, size: torch.Size, mat: torch.Tensor, cuda_spmm_alg: int = 1)¶ Performs the operation.
This function is to be overridden by all subclasses.
It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types).
The context can be used to store tensors that can be then retrieved during the backward pass.
-
-
MinkowskiEngine.sparse_matrix_functions.
spmm
(rows: torch.Tensor, cols: torch.Tensor, vals: torch.Tensor, size: torch.Size, mat: torch.Tensor, is_sorted: bool = False, cuda_spmm_alg: int = 1) → torch.Tensor¶
-
MinkowskiEngine.sparse_matrix_functions.
spmm_average
(rows: torch.Tensor, cols: torch.Tensor, size: torch.Size, mat: torch.Tensor, cuda_spmm_alg: int = 1) → (torch.Tensor, torch.Tensor, torch.Tensor)¶
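Example (a minimal sketch of how spmm might be called; the COO triplets below are hypothetical):
>>> import torch
>>> from MinkowskiEngine.sparse_matrix_functions import spmm
>>> rows = torch.LongTensor([0, 0, 1])
>>> cols = torch.LongTensor([0, 1, 1])
>>> vals = torch.FloatTensor([1.0, 2.0, 3.0])
>>> mat = torch.rand(2, 4)
>>> out = spmm(rows, cols, vals, torch.Size([2, 2]), mat)  # (2 x 2 sparse) @ (2 x 4 dense)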