MinkowskiEngine.utils package

Submodules

MinkowskiEngine.utils.collation module

class MinkowskiEngine.utils.collation.SparseCollation(limit_numpoints=-1, dtype=torch.int32, device=None)

Bases: object

Generates a collate function for coordinates, features, and labels.

Please refer to the training example for the usage.

Args:

limit_numpoints (int): if a positive integer, limits the batch size so that the total number of input coordinates stays below limit_numpoints. If 0 or False, concatenate all points; the default of -1 likewise imposes no limit.

Example:

>>> data_loader = torch.utils.data.DataLoader(
>>>     dataset,
>>>     ...,
>>>     collate_fn=SparseCollation())
>>> for d in iter(data_loader):
>>>     print(d)
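
For context, a more complete sketch of pairing SparseCollation with a DataLoader. The RandomPointCloud dataset, its sizes, and its random values are hypothetical; only the (coordinates, features, labels) item contract comes from the documentation above.

.. code-block:: python

   import numpy as np
   from torch.utils.data import DataLoader, Dataset

   from MinkowskiEngine.utils import SparseCollation


   class RandomPointCloud(Dataset):
       """Hypothetical dataset; each item is a (coordinates, features, labels) tuple."""

       def __len__(self):
           return 8

       def __getitem__(self, i):
           n = 100
           coords = np.random.randint(0, 32, size=(n, 3))   # integer voxel coordinates
           feats = np.random.rand(n, 4).astype(np.float32)  # per-point features
           labels = np.random.randint(0, 10, size=(n,))     # per-point labels
           return coords, feats, labels


   loader = DataLoader(
       RandomPointCloud(),
       batch_size=4,
       collate_fn=SparseCollation(limit_numpoints=-1),
   )
   for bcoords, bfeats, blabels in loader:
       # bcoords has the batch index prepended as its first column.
       print(bcoords.shape, bfeats.shape, blabels.shape)
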
MinkowskiEngine.utils.collation.batch_sparse_collate(data, dtype=torch.int32, device=None)

The wrapper function that can be used in conjunction with torch.utils.data.DataLoader to generate inputs for a sparse tensor.

Please refer to the training example for the usage.

Args:

data: list of (coordinates, features, labels) tuples.
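
As a hedged sketch, the function can also be called directly on such a list of tuples; the two samples below are made up.

.. code-block:: python

   import numpy as np
   from MinkowskiEngine.utils import batch_sparse_collate

   # Two hypothetical samples, each a (coordinates, features, labels) tuple.
   data = [
       (np.array([[0, 0, 0], [0, 0, 1]]),
        np.random.rand(2, 4).astype(np.float32),
        np.array([1, 2])),
       (np.array([[1, 1, 1]]),
        np.random.rand(1, 4).astype(np.float32),
        np.array([0])),
   ]

   bcoords, bfeats, blabels = batch_sparse_collate(data)
   print(bcoords)  # the batch index is prepended as the first column

   # Typical use: pass it as the collate_fn of a torch.utils.data.DataLoader,
   # e.g. DataLoader(dataset, batch_size=4, collate_fn=batch_sparse_collate).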

MinkowskiEngine.utils.collation.batched_coordinates(coords, dtype=torch.int32, device=None)

Create ME.SparseTensor coordinates from a sequence of coordinates.

Given a list of either numpy or pytorch tensor coordinates, return the batched coordinates suitable for ME.SparseTensor.

Args:

coords (a sequence of torch.Tensor or numpy.ndarray): a list of coordinates.

dtype: torch data type of the returned tensor. torch.int32 by default.

Returns:

batched_coordinates (torch.Tensor): the batched coordinates.

Warning

From v0.4, the batch index will be prepended before all coordinates.
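
A minimal sketch, using made-up input values and the v0.4+ convention noted above:

.. code-block:: python

   import numpy as np
   from MinkowskiEngine.utils import batched_coordinates

   coords0 = np.array([[0, 0], [0, 1]])  # sample 0
   coords1 = np.array([[1, 1]])          # sample 1

   bcoords = batched_coordinates([coords0, coords1])
   print(bcoords)
   # The batch index is prepended as the first column, e.g.
   # [[0, 0, 0],
   #  [0, 0, 1],
   #  [1, 1, 1]] with dtype torch.int32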

MinkowskiEngine.utils.collation.sparse_collate(coords, feats, labels=None, dtype=torch.int32, device=None)

Create input arguments for a sparse tensor; see the SparseTensor documentation for details.

Convert a set of coordinates and features into batched coordinates and batched features.

Args:

coords (a sequence of torch.Tensor or numpy.ndarray): a sequence of coordinates, one per sample.

feats (a sequence of torch.Tensor or numpy.ndarray): a sequence of features, one per sample.

labels (a sequence of torch.Tensor or numpy.ndarray, optional): a sequence of labels associated with the inputs.
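
A minimal sketch with made-up inputs; the batched outputs can then be used to build a sparse tensor and compute a loss.

.. code-block:: python

   import numpy as np
   from MinkowskiEngine.utils import sparse_collate

   coords = [np.array([[0, 0], [0, 1]]), np.array([[1, 1]])]
   feats = [np.random.rand(2, 3).astype(np.float32),
            np.random.rand(1, 3).astype(np.float32)]
   labels = [np.array([0, 1]), np.array([1])]

   bcoords, bfeats, blabels = sparse_collate(coords, feats, labels)
   # bcoords has the batch index prepended as the first column;
   # bfeats and blabels are concatenated row-wise in the same order.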

MinkowskiEngine.utils.coords module

MinkowskiEngine.utils.coords.get_coords_map(x, y)

Get mapping between sparse tensor 1 and sparse tensor 2.

Args:

x (MinkowskiEngine.SparseTensor): a sparse tensor with x.tensor_stride <= y.tensor_stride.

y (MinkowskiEngine.SparseTensor): a sparse tensor with x.tensor_stride <= y.tensor_stride.

Returns:

x_indices (torch.LongTensor): the indices of x that correspond to the returned indices of y.

y_indices (torch.LongTensor): the indices of y that correspond to the returned indices of x.

Example:

.. code-block:: python

   sp_tensor = ME.SparseTensor(features, coords=coordinates)
   out_sp_tensor = stride_2_conv(sp_tensor)

   ins, outs = get_coords_map(sp_tensor, out_sp_tensor)
   for i, o in zip(ins, outs):
      print(f"{i} -> {o}")

MinkowskiEngine.utils.gradcheck module

MinkowskiEngine.utils.gradcheck.gradcheck(func: Callable[[], Union[torch.Tensor, Sequence[torch.Tensor]]], inputs: Union[torch.Tensor, Sequence[torch.Tensor]], eps: float = 1e-06, atol: float = 1e-05, rtol: float = 0.001, raise_exception: bool = True, check_sparse_nnz: bool = False, nondet_tol: float = 0.0, check_undefined_grad: bool = True, check_grad_dtypes: bool = False) → bool

Check gradients computed via small finite differences against analytical gradients w.r.t. tensors in inputs that are of floating point or complex type and with requires_grad=True. The check between numerical and analytical gradients uses allclose().

For complex functions, no notion of Jacobian exists. Instead, gradcheck verifies that the numerical and analytical values of the Wirtinger and conjugate Wirtinger derivatives are consistent. The gradient computation is done under the assumption that the overall function has a real-valued output. For functions with complex output, gradcheck compares the numerical and analytical gradients for two values of grad_output: 1 and 1j. For more details, see the complex autograd documentation.

.. note::

The default values are designed for :attr:`input` of double precision.
This check will likely fail if :attr:`input` is of less precision, e.g.,
``FloatTensor``.

Warning

If any checked tensor in input has overlapping memory, i.e., different indices pointing to the same memory address (e.g., from torch.expand()), this check will likely fail because the numerical gradients computed by point perturbation at such indices will change values at all other indices that share the same memory address.

Args:

func (function): a Python function that takes Tensor inputs and returns a Tensor or a tuple of Tensors.

inputs (tuple of Tensor or Tensor): inputs to the function.

eps (float, optional): perturbation for finite differences.

atol (float, optional): absolute tolerance.

rtol (float, optional): relative tolerance.

raise_exception (bool, optional): whether to raise an exception if the check fails. The exception gives more information about the exact nature of the failure, which is helpful when debugging gradchecks.

check_sparse_nnz (bool, optional): if True, gradcheck allows SparseTensor inputs, and for any SparseTensor at input, gradcheck will perform its check at nnz positions only.

nondet_tol (float, optional): tolerance for non-determinism. When running identical inputs through the differentiation, the results must either match exactly (default, 0.0) or be within this tolerance.

check_undefined_grad (bool, optional): if True, check that undefined output grads are supported and treated as zeros.

Returns:

True if all differences satisfy the allclose condition.
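
A minimal sketch, assuming the same call convention as torch.autograd.gradcheck: double-precision inputs with requires_grad=True and a function built from differentiable operations.

.. code-block:: python

   import torch
   from MinkowskiEngine.utils.gradcheck import gradcheck

   # Double-precision input, as recommended in the note above.
   x = torch.randn(4, 3, dtype=torch.float64, requires_grad=True)

   # Returns True when the numerical and analytical gradients agree
   # within atol/rtol.
   assert gradcheck(lambda t: (t * t).sum(), (x,), eps=1e-6, atol=1e-5)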

MinkowskiEngine.utils.init module

MinkowskiEngine.utils.init.kaiming_normal_(tensor, a=0, mode='fan_in', nonlinearity='leaky_relu')
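
No description is provided above. As a hedged sketch, the signature mirrors torch.nn.init.kaiming_normal_, so it can be applied in place to a weight tensor; the (kernel_volume, in_channels, out_channels) shape used below is only an assumption about the Minkowski kernel layout, not something stated in this documentation.

.. code-block:: python

   import torch
   from MinkowskiEngine.utils.init import kaiming_normal_

   # Hypothetical kernel tensor; the shape is an assumption.
   w = torch.empty(27, 3, 64)
   kaiming_normal_(w, mode='fan_out', nonlinearity='relu')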

MinkowskiEngine.utils.quantization module

MinkowskiEngine.utils.quantization.fnv_hash_vec(arr)

FNV64-1A

MinkowskiEngine.utils.quantization.quantize(coords)

Returns a unique index map and an inverse index map.

Args:

coords (numpy.ndarray or torch.Tensor): a matrix of size \(N \times D\) where \(N\) is the number of points in the \(D\) dimensional space.

Returns:

unique_map (numpy.ndarray or torch.Tensor): a list of indices that defines the unique coordinates; coords[unique_map] gives the unique coordinates.

inverse_map (numpy.ndarray or torch.Tensor): a list of indices that defines the inverse map recovering the original coordinates; coords[unique_map[inverse_map]] == coords.

Example:

>>> unique_map, inverse_map = quantize(coords)
>>> unique_coords = coords[unique_map]
>>> print(unique_coords[inverse_map] == coords)  # True, ..., True
>>> print(coords[unique_map[inverse_map]] == coords)  # True, ..., True
MinkowskiEngine.utils.quantization.quantize_label(coords, labels, ignore_label)
MinkowskiEngine.utils.quantization.ravel_hash_vec(arr)

Ravel the coordinates after subtracting the min coordinates.

MinkowskiEngine.utils.quantization.sparse_quantize(coordinates, features=None, labels=None, ignore_label=-100, return_index=False, return_inverse=False, return_maps_only=False, quantization_size=None, device='cpu')

Given coordinates and features (and optionally labels), the function generates quantized (voxelized) coordinates.

Args:

coordinates (numpy.ndarray or torch.Tensor): a matrix of size \(N \times D\) where \(N\) is the number of points in the \(D\) dimensional space.

features (numpy.ndarray or torch.Tensor, optional): a matrix of size \(N \times D_F\) where \(N\) is the number of points and \(D_F\) is the dimension of the features. Must have the same container as coords (i.e. if coords is a torch.Tensor, feats must also be a torch.Tensor).

labels (numpy.ndarray or torch.IntTensor, optional): integer labels associated with each coordinate. Must have the same container as coords (i.e. if coords is a torch.Tensor, labels must also be a torch.Tensor). For classification where a set of points is mapped to one label, do not feed the labels.

ignore_label (int, optional): the integer value of the ignore label, e.g., for use with torch.nn.CrossEntropyLoss(ignore_index=ignore_label).

return_index (bool, optional): set True if you want the indices of the quantized coordinates. False by default.

return_inverse (bool, optional): set True if you want the indices that can recover the discretized original coordinates. False by default. return_index must be True when return_inverse is True.

return_maps_only (bool, optional): if set, return the unique_map or optionally inverse map, but not the coordinates. Can be used if you don’t care about final coordinates or if you use device==cuda and you don’t need coordinates on GPU. This returns either unique_map alone or (unique_map, inverse_map) if return_inverse is set.

quantization_size (float, list, or numpy.ndarray, optional): if set, the quantization size defines the smallest distance between coordinates, i.e., the length of each side of the hyperrectangular grid cell.

device (str, optional): either 'cpu' or 'cuda'.

Example:

>>> unique_map, inverse_map = sparse_quantize(discrete_coords, return_index=True, return_inverse=True)
>>> unique_coords = discrete_coords[unique_map]
>>> print(unique_coords[inverse_map] == discrete_coords)  # True

Example:

>>> # Segmentation
>>> criterion = torch.nn.CrossEntropyLoss(ignore_index=-100)
>>> coords, feats, labels = MinkowskiEngine.utils.sparse_quantize(
>>>     coords, feats, labels, ignore_label=-100, quantization_size=0.1)
>>> output = net(MinkowskiEngine.SparseTensor(feats, coords))
>>> loss = criterion(output.F, labels.long())
>>>
>>> # Classification
>>> criterion = torch.nn.CrossEntropyLoss(ignore_index=-100)
>>> coords, feats = MinkowskiEngine.utils.sparse_quantize(coords, feats)
>>> output = net(MinkowskiEngine.SparseTensor(feats, coords))
>>> loss = criterion(output.F, labels.long())
MinkowskiEngine.utils.quantization.unique_coordinate_map(coordinates: torch.Tensor, tensor_stride: Union[int, collections.abc.Sequence, numpy.ndarray] = 1) → Tuple[torch.IntTensor, torch.IntTensor]

Returns the unique indices and the inverse indices of the coordinates.

coordinates (torch.Tensor): an integer tensor (CUDA if coordinate_map_type == CoordinateMapType.GPU) that defines the coordinates.

Example:

>>> coordinates = torch.IntTensor([[0, 0], [0, 0], [0, 1], [0, 2]])
>>> unique_map, inverse_map = unique_coordinate_map(coordinates)
>>> coordinates[unique_map] # unique coordinates
>>> torch.all(coordinates == coordinates[unique_map][inverse_map]) # True

Module contents