Miscellanea

Controlling the number of threads

The kernel map [1] defines which row of an input feature matrix maps to which row of the output feature matrix. Computing it, however, becomes expensive as the dimension increases. Fortunately, part of the computation can be parallelized, and we provide a multi-threaded function to speed up this process.

By default, we use all CPU threads available on the system. However, this might not be desirable in some cases. Simply set the environment variable OMP_NUM_THREADS to control the number of threads you want to use, e.g., export OMP_NUM_THREADS=8; python your_program.py. If you use SLURM, the environment variable OMP_NUM_THREADS will be set automatically.
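The variable can also be set from Python before the engine is imported; the sketch below is illustrative (the thread count of 8 is only an example, and setting the variable before import is assumed to be early enough for the OpenMP runtime to pick it up):

    import os
    os.environ["OMP_NUM_THREADS"] = "8"  # set before the OpenMP runtime initializes

    import MinkowskiEngine as ME  # kernel-map computation now uses at most 8 threads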

is_cuda_available

MinkowskiEngine.is_cuda_available() → bool
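A quick runtime check can guard GPU-specific code paths; the CPU fallback pattern below is illustrative, not part of the documented API:

    import MinkowskiEngine as ME

    # fall back to CPU when the engine was built without CUDA or no GPU is present
    device = "cuda" if ME.is_cuda_available() else "cpu"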

cuda_version

MinkowskiEngine.cuda_version() → int
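The returned integer reports the CUDA version the engine was built against; assuming it follows the common CUDART_VERSION encoding (major*1000 + minor*10), 11010 would mean CUDA 11.1:

    import MinkowskiEngine as ME

    if ME.is_cuda_available():
        print(ME.cuda_version())  # e.g. 11010; the encoding above is an assumption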

get_gpu_memory_info

MinkowskiEngine.get_gpu_memory_info() → Tuple[int, int]
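The two returned integers describe GPU memory in bytes; the (free, total) ordering in this sketch is an assumption, so verify it against your device:

    import MinkowskiEngine as ME

    free, total = ME.get_gpu_memory_info()  # (free, total) byte ordering assumed
    print(f"GPU memory: {free / 2**30:.2f} GiB free of {total / 2**30:.2f} GiB total")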

set_memory_manager_backend

MinkowskiEngine.set_memory_manager_backend(backend: MinkowskiEngineBackend._C.GPUMemoryAllocatorType)

Alias for set_gpu_allocator. Deprecated and will be removed.
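New code should call set_gpu_allocator directly. A minimal sketch, assuming the GPUMemoryAllocatorType enum exposes a PYTORCH value for PyTorch's caching allocator:

    import MinkowskiEngine as ME

    # route GPU allocations through PyTorch's caching allocator
    # instead of the default CUDA allocator (enum value assumed)
    ME.set_gpu_allocator(ME.GPUMemoryAllocatorType.PYTORCH)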