ncore.sensors Package#

Package exposing methods related to NCore’s sensor types

class ncore.sensors.BivariateWindshieldModel(
windshield_distortion_parameters: BivariateWindshieldModelParameters,
device: str | device = device(type='cuda'),
dtype: dtype = torch.float32,
)#

Bases: ExternalDistortionModel

Implements an external distortion caused by a vehicle’s windshield. The model is only applicable to cameras where the whole area of interest is projected through the windshield.

The distortion is computed on a spherical phi/theta angle-based representation of a sensor ray with direction=[x,y,z], such that phi = asin(x/sqrt(x^2+y^2+z^2)) and theta = asin(y/sqrt(x^2+y^2+z^2)).

Phi and theta are then deflected via distortion polynomials before the transformed ray is reconstructed with direction [sin(phi’), sin(theta’), sqrt(1 - sin(phi’)^2 - sin(theta’)^2)]. The distortions on phi and theta are computed via separate bivariate polynomials of order N in phi and theta, e.g. phi’ = c0 + c1*phi + c2*phi^2 + (c3 + c4*phi)*theta + c5*theta^2

static compute_poly_order(
poly_coeffs: Tensor,
)#

Computes the order of a bivariate polynomial given its array of coefficients
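
Assuming the triangular coefficient layout shown in poly_eval_2d, an order-N bivariate polynomial has (N+1)(N+2)/2 coefficients, so the order can be recovered by inverting that triangular count. A minimal sketch (the real method operates on tensors):

```python
import math

def bivariate_poly_order(n_coeffs: int) -> int:
    """Order N of a bivariate polynomial with n_coeffs = (N+1)(N+2)/2 coefficients."""
    # Solve (N+1)(N+2)/2 = n_coeffs for N via the quadratic formula.
    order = int(round((math.sqrt(8 * n_coeffs + 1) - 3) / 2))
    if (order + 1) * (order + 2) // 2 != n_coeffs:
        raise ValueError(f"{n_coeffs} is not a valid triangular coefficient count")
    return order

print(bivariate_poly_order(6))  # c0..c5 -> order 2
```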

distort_camera_rays(
camera_rays: Tensor,
) Tensor#

Applies distortion to camera rays in forward direction, from external to internal

static distort_rays(
camera_rays: Tensor,
phi_poly: Tensor,
theta_poly: Tensor,
order_phi: int,
order_theta: int,
poly2d_eval_func,
) Tensor#

Applies distortion to rays in forward direction, from external to internal

get_parameters() BivariateWindshieldModelParameters#

Returns the parameters specific to the current windshield distortion model instance

horizontal_poly: Tensor#

Polynomial used for horizontal component of distortion in forward direction

horizontal_poly_inverse: Tensor#

Polynomial used for horizontal component of distortion in backward direction

order_phi: int#

Order of the distortion polynomial on phi

order_theta: int#

Order of the distortion polynomial on theta

static poly_eval_2d(
coefficients: Tensor,
x: Tensor,
y: Tensor,
order: int,
) Tensor#

The bivariate polynomial, provided as a single-dimension tensor [c0, c1, c2, …, cn], is evaluated as: c0*x^0 + c1*x^1 + c2*x^2 + (c3*x^0 + c4*x^1)*y^1 + (c5*x^0)*y^2. In essence, the coefficient of each successive power of y is itself a polynomial in x, whose maximum degree decreases so that the total degree never exceeds the order.
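
The triangular layout can be made concrete with a scalar sketch (the real method evaluates tensors in batch; this only illustrates the coefficient ordering):

```python
def poly_eval_2d(coefficients, x, y, order):
    """Evaluate a bivariate polynomial with the triangular coefficient layout.

    For order=2: c0 + c1*x + c2*x^2 + (c3 + c4*x)*y + (c5)*y^2
    """
    result, idx = 0.0, 0
    for j in range(order + 1):          # power of y
        inner = 0.0
        for i in range(order + 1 - j):  # power of x, capped so total degree <= order
            inner += coefficients[idx] * x ** i
            idx += 1
        result += inner * y ** j
    return result

# [1, 2, 3, 4, 5, 6] -> 1 + 2x + 3x^2 + (4 + 5x)y + 6y^2
print(poly_eval_2d([1, 2, 3, 4, 5, 6], 1.0, 1.0, 2))  # 21.0
```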

reference_poly: ReferencePolynomial#

Reference polynomial used for the distortion model

undistort_camera_rays(
camera_rays: Tensor,
) Tensor#

Applies distortion to camera rays in backward direction, from internal to external

vertical_poly: Tensor#

Polynomial used for vertical component of distortion in forward direction

vertical_poly_inverse: Tensor#

Polynomial used for vertical component of distortion in backward direction

class ncore.sensors.CameraModel(
camera_model_parameters: CameraModelParameters,
device: str | device,
dtype: dtype,
)#

Bases: BaseModel, ABC

Base class for all camera models

class ImagePointsReturn(
image_points: Tensor,
valid_flag: Tensor,
jacobians: Tensor | None = None,
)#

Bases: object

Contains
  • image point coordinates [float] (n,2)

  • valid_flag [bool] (n,)

  • [optional] Jacobians of the projection [float] (n,2,3)

image_points: Tensor#
jacobians: Tensor | None = None#
valid_flag: Tensor#
class PixelsReturn(
pixels: Tensor,
valid_flag: Tensor,
)#

Bases: object

Contains
  • pixel indices [int] (n,2)

  • valid_flag [bool] (n,)

pixels: Tensor#
valid_flag: Tensor#
class WorldPointsToImagePointsReturn(
image_points: Tensor,
T_world_sensors: Tensor | None = None,
valid_indices: Tensor | None = None,
timestamps_us: Tensor | None = None,
)#

Bases: object

Contains
  • image point coordinates of the valid projections [float] (n,2)

  • [optional] world-to-sensor poses of valid projections [float] (n,4,4)

  • [optional] indices of the valid projections relative to the input points [int] (n,)

  • [optional] timestamps of the valid projections [int] (n,)

T_world_sensors: Tensor | None = None#
image_points: Tensor#
timestamps_us: Tensor | None = None#
valid_indices: Tensor | None = None#
class WorldPointsToPixelsReturn(
pixels: Tensor,
T_world_sensors: Tensor | None = None,
valid_indices: Tensor | None = None,
timestamps_us: Tensor | None = None,
)#

Bases: object

Contains
  • pixel indices of the valid projections [int] (n,2)

  • [optional] world-to-sensor poses of valid projections [float] (n,4,4)

  • [optional] indices of the valid projections relative to the input points [int] (n,)

  • [optional] timestamps of the valid projections [int] (n,)

T_world_sensors: Tensor | None = None#
pixels: Tensor#
timestamps_us: Tensor | None = None#
valid_indices: Tensor | None = None#
class WorldRaysReturn(
world_rays: Tensor,
T_sensor_worlds: Tensor | None = None,
timestamps_us: Tensor | None = None,
)#

Bases: object

Contains
  • rays [point, direction] in the world coordinate frame, represented by 3d start of ray points and 3d ray directions [float] (n,6)

  • [optional] sensor-to-worlds poses of the returned rays [float] (n,4,4)

  • [optional] timestamps of the returned rays [int] (n,)

T_sensor_worlds: Tensor | None = None#
timestamps_us: Tensor | None = None#
world_rays: Tensor#
camera_rays_to_image_points(
cam_rays: Tensor | ndarray,
return_jacobians: bool = False,
) ImagePointsReturn#

For each camera ray, computes the corresponding image point coordinates and a valid flag. Optionally, the Jacobians of the per-ray transformations can be computed as well

camera_rays_to_pixels(
cam_rays: Tensor | ndarray,
) PixelsReturn#

For each camera ray, computes the corresponding pixel index and a valid flag

external_distortion: ExternalDistortionModel | None#

Source of distortion external to the camera (e.g. windshield). Can be empty (None) if no such source exists.

static from_parameters(
cam_model_parameters: FThetaCameraModelParameters | OpenCVPinholeCameraModelParameters | OpenCVFisheyeCameraModelParameters,
device: str | device = device(type='cuda'),
dtype: dtype = torch.float32,
) CameraModel#

Initialize a generic camera model class from camera model parameters

image_points_relative_frame_times(
image_points: Tensor | ndarray,
) Tensor#

Convenience wrapper for image_points_relative_frame_times_kernel using the camera’s resolution and shutter type, including tensor conversion

static image_points_relative_frame_times_kernel(
image_points: Tensor,
resolution: Tensor,
shutter_type: ShutterType,
) Tensor#

Get relative frame-times based on the image point coordinates and rolling shutter type
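One plausible convention for such a kernel can be sketched as follows. The mapping below (row-linear times in [0, 1] for a top-to-bottom rolling shutter, a constant mid-frame time for a global shutter) is an assumption for illustration, not necessarily NCore's convention; the shutter-type names are likewise hypothetical.

```python
def relative_frame_times(image_points, height, shutter_type):
    """Relative frame time per image point, mapped to [0, 1] over the readout.

    Assumed conventions (illustrative only):
      - 'rolling_top_to_bottom': time grows linearly with the row coordinate
      - 'global': every pixel is exposed at the same mid-frame time (0.5)
    """
    times = []
    for _, v in image_points:  # image point = (u, v), with v the row coordinate
        if shutter_type == "rolling_top_to_bottom":
            times.append(v / height)
        elif shutter_type == "global":
            times.append(0.5)
        else:
            raise ValueError(f"unsupported shutter type: {shutter_type}")
    return times

print(relative_frame_times([(10.0, 0.0), (10.0, 240.0)], 480, "rolling_top_to_bottom"))
```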

image_points_to_camera_rays(
image_points: Tensor | ndarray,
) Tensor#

Computes camera rays for each image point

image_points_to_pixels(
image_points: Tensor | ndarray,
) Tensor#

Given continuous image point coordinates, computes the corresponding pixel indices.

image_points_to_world_rays_mean_pose(
image_points: Tensor | ndarray,
T_sensor_world_start: Tensor | ndarray,
T_sensor_world_end: Tensor | ndarray,
start_timestamp_us: int | None = None,
end_timestamp_us: int | None = None,
camera_rays: Tensor | ndarray | None = None,
return_T_sensor_worlds: bool = False,
return_timestamps: bool = False,
) WorldRaysReturn#

Unprojects image points to world rays using the mean pose of the sensor between the start and end poses (not compensating for potential sensor-motion).

Can optionally re-use known camera rays associated with image points.

For each image point returns 3d world rays [point, direction], represented by 3d start of ray points and 3d ray directions in the world frame

image_points_to_world_rays_shutter_pose(
image_points: Tensor | ndarray,
T_sensor_world_start: Tensor | ndarray,
T_sensor_world_end: Tensor | ndarray,
start_timestamp_us: int | None = None,
end_timestamp_us: int | None = None,
camera_rays: Tensor | ndarray | None = None,
return_T_sensor_worlds: bool = False,
return_timestamps: bool = False,
) WorldRaysReturn#

Unprojects image points to world rays using rolling-shutter compensation of sensor motion.

Can optionally re-use known camera rays associated with image points.

For each image point returns 3d world rays [point, direction], represented by 3d start of ray points and 3d ray directions in the world frame

image_points_to_world_rays_static_pose(
image_points: Tensor | ndarray,
T_sensor_world: Tensor | ndarray,
timestamp_us: int | None = None,
camera_rays: Tensor | ndarray | None = None,
return_T_sensor_worlds: bool = False,
return_timestamps: bool = False,
) WorldRaysReturn#

Unprojects image points to world rays using a fixed sensor pose (not compensating for potential sensor-motion).

Can optionally re-use known camera rays associated with image points.

For each image point returns 3d world rays [point, direction], represented by 3d start of ray points and 3d ray directions in the world frame
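
The static-pose lift can be sketched with numpy. This assumes T_sensor_world is the 4x4 rigid transform mapping world coordinates into the sensor frame (so its inverse is the sensor pose), and that camera rays are directions in the sensor frame; the real method returns a WorldRaysReturn rather than a bare array.

```python
import numpy as np

def camera_rays_to_world_rays(camera_rays, T_sensor_world):
    """Lift camera-frame ray directions (n, 3) to world rays [origin, direction] (n, 6)."""
    T_world_sensor = np.linalg.inv(T_sensor_world)
    R, t = T_world_sensor[:3, :3], T_world_sensor[:3, 3]
    dirs = camera_rays @ R.T                  # rotate directions into the world frame
    origins = np.broadcast_to(t, dirs.shape)  # every ray starts at the sensor origin
    return np.concatenate([origins, dirs], axis=1)

rays = camera_rays_to_world_rays(np.array([[0.0, 0.0, 1.0]]), np.eye(4))
print(rays)  # [[0. 0. 0. 0. 0. 1.]]
```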

pixels_to_camera_rays(
pixel_idxs: Tensor | ndarray,
) Tensor#

For each pixel index computes its corresponding camera ray

pixels_to_image_points(
pixel_idxs: Tensor | ndarray,
) Tensor#

Given integer-based pixels indices, computes corresponding continuous image point coordinates representing the center of each pixel.
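
Together with image_points_to_pixels, this is a simple half-pixel convention. A sketch, assuming the common layout where pixel (i, j) covers the unit square [i, i+1) x [j, j+1):

```python
import math

def pixel_to_image_point(pixel):
    """Integer pixel index -> continuous coordinates of the pixel center."""
    return (pixel[0] + 0.5, pixel[1] + 0.5)

def image_point_to_pixel(point):
    """Continuous image point -> index of the containing pixel."""
    return (math.floor(point[0]), math.floor(point[1]))

print(pixel_to_image_point((3, 7)))      # (3.5, 7.5)
print(image_point_to_pixel((3.9, 7.1)))  # (3, 7)
```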

pixels_to_world_rays_mean_pose(
pixel_idxs: Tensor | ndarray,
T_sensor_world_start: Tensor | ndarray,
T_sensor_world_end: Tensor | ndarray,
start_timestamp_us: int | None = None,
end_timestamp_us: int | None = None,
camera_rays: Tensor | ndarray | None = None,
return_T_sensor_worlds: bool = False,
return_timestamps: bool = False,
) WorldRaysReturn#
pixels_to_world_rays_shutter_pose(
pixel_idxs: Tensor | ndarray,
T_sensor_world_start: Tensor | ndarray,
T_sensor_world_end: Tensor | ndarray,
start_timestamp_us: int | None = None,
end_timestamp_us: int | None = None,
camera_rays: Tensor | ndarray | None = None,
return_T_sensor_worlds: bool = False,
return_timestamps: bool = False,
) WorldRaysReturn#
pixels_to_world_rays_static_pose(
pixel_idxs: Tensor | ndarray,
T_sensor_world: Tensor | ndarray,
timestamp_us: int | None = None,
camera_rays: Tensor | ndarray | None = None,
return_T_sensor_worlds: bool = False,
return_timestamps: bool = False,
) WorldRaysReturn#
resolution: Tensor#

Width and height of the image in pixels (int32, [2,])

shutter_type: ShutterType#

Shutter type of the camera’s imaging sensor

world_points_to_image_points_mean_pose(
world_points: Tensor | ndarray,
T_world_sensor_start: Tensor | ndarray,
T_world_sensor_end: Tensor | ndarray,
start_timestamp_us: int | None = None,
end_timestamp_us: int | None = None,
return_T_world_sensors: bool = False,
return_valid_indices: bool = False,
return_timestamps: bool = False,
return_all_projections: bool = False,
) WorldPointsToImagePointsReturn#

Projects world points to corresponding image point coordinates using the mean pose of the sensor between the start and end poses (not compensating for potential sensor-motion).

world_points_to_image_points_shutter_pose(
world_points: Tensor | ndarray,
T_world_sensor_start: Tensor | ndarray,
T_world_sensor_end: Tensor | ndarray,
start_timestamp_us: int | None = None,
end_timestamp_us: int | None = None,
max_iterations: int = 10,
stop_mean_error_px: float = 0.001,
stop_delta_mean_error_px: float = 1e-05,
return_T_world_sensors: bool = False,
return_valid_indices: bool = False,
return_timestamps: bool = False,
return_all_projections: bool = False,
) WorldPointsToImagePointsReturn#

Projects world points to corresponding image point coordinates using rolling-shutter compensation of sensor motion
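
The max_iterations / stop_mean_error_px parameters suggest a fixed-point iteration: project under a guessed capture time, read the implied rolling-shutter time off the resulting row, and repeat until the image point stabilizes. A skeleton under that assumption, with the three callables standing in for the model's own pose interpolation, projection, and frame-time methods:

```python
def project_rolling_shutter(world_point, pose_at, project, relative_time,
                            max_iterations=10, stop_error_px=1e-3):
    """Fixed-point iteration for rolling-shutter projection of one point.

    pose_at(t):        interpolated pose for relative frame time t in [0, 1]
    project(p, pose):  image point of world point p under that pose
    relative_time(ip): relative frame time implied by image point ip
    """
    t = 0.5  # start from the mid-frame pose
    image_point = project(world_point, pose_at(t))
    for _ in range(max_iterations):
        t = relative_time(image_point)
        new_point = project(world_point, pose_at(t))
        err = max(abs(a - b) for a, b in zip(new_point, image_point))
        image_point = new_point
        if err < stop_error_px:
            break
    return image_point
```

For a static sensor the iteration converges immediately, since the projection no longer depends on the capture time.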

world_points_to_image_points_static_pose(
world_points: Tensor | ndarray,
T_world_sensor: Tensor | ndarray,
timestamp_us: int | None = None,
return_T_world_sensors: bool = False,
return_valid_indices: bool = False,
return_timestamps: bool = False,
return_all_projections: bool = False,
) WorldPointsToImagePointsReturn#

Projects world points to corresponding image point coordinates using a fixed sensor pose (not compensating for potential sensor-motion).
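
The static-pose case reduces to a plain rigid transform followed by the intrinsic projection. A pinhole stand-in (the real camera models add their own distortion), assuming T_world_sensor maps sensor-frame points into the world frame:

```python
import numpy as np

def project_static(world_points, T_world_sensor, focal, principal_point):
    """Project world points (n, 3) with a fixed pose and a simple pinhole intrinsic.

    Returns image points (n, 2) and a validity flag (point in front of the camera).
    """
    T_sensor_world = np.linalg.inv(T_world_sensor)
    pts_h = np.concatenate([world_points, np.ones((len(world_points), 1))], axis=1)
    pts_cam = (pts_h @ T_sensor_world.T)[:, :3]      # world -> sensor frame
    valid = pts_cam[:, 2] > 0
    uv = focal * pts_cam[:, :2] / pts_cam[:, 2:3] + principal_point
    return uv, valid

uv, valid = project_static(np.array([[0.0, 0.0, 2.0]]), np.eye(4),
                           500.0, np.array([320.0, 240.0]))
print(uv, valid)  # [[320. 240.]] [ True]
```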

world_points_to_pixels_mean_pose(
world_points: Tensor | ndarray,
T_world_sensor_start: Tensor | ndarray,
T_world_sensor_end: Tensor | ndarray,
start_timestamp_us: int | None = None,
end_timestamp_us: int | None = None,
return_T_world_sensors: bool = False,
return_valid_indices: bool = False,
return_timestamps: bool = False,
return_all_projections: bool = False,
) WorldPointsToPixelsReturn#

Projects world points to corresponding pixel indices using the mean pose of the sensor between the start and end poses (not compensating for potential sensor-motion).

world_points_to_pixels_shutter_pose(
world_points: Tensor | ndarray,
T_world_sensor_start: Tensor | ndarray,
T_world_sensor_end: Tensor | ndarray,
start_timestamp_us: int | None = None,
end_timestamp_us: int | None = None,
max_iterations: int = 10,
stop_mean_error_px: float = 0.001,
stop_delta_mean_error_px: float = 1e-05,
return_T_world_sensors: bool = False,
return_valid_indices: bool = False,
return_timestamps: bool = False,
return_all_projections: bool = False,
) WorldPointsToPixelsReturn#

Projects world points to corresponding pixel indices using rolling-shutter compensation of sensor motion

world_points_to_pixels_static_pose(
world_points: Tensor | ndarray,
T_world_sensor: Tensor | ndarray,
timestamp_us: int | None = None,
return_T_world_sensors: bool = False,
return_valid_indices: bool = False,
return_timestamps: bool = False,
return_all_projections: bool = False,
) WorldPointsToPixelsReturn#

Projects world points to corresponding pixel indices using a fixed sensor pose (not compensating for potential sensor-motion).

class ncore.sensors.ExternalDistortionModel(
device: str | device,
dtype: dtype,
)#

Bases: BaseModel, ABC

Base class for distortion effects from external causes to the camera

abstractmethod distort_camera_rays(
camera_rays: Tensor,
) Tensor#

Applies distortion to camera rays in forward direction, from external to internal

static from_parameters(
external_distortion_parameters: BivariateWindshieldModelParameters,
device: str | device = device(type='cuda'),
dtype: dtype = torch.float32,
) ExternalDistortionModel#

Initialize a generic external distortion model from parameters

abstractmethod get_parameters() BivariateWindshieldModelParameters#

Returns the parameters specific to the concrete distortion model

abstractmethod undistort_camera_rays(
camera_rays: Tensor,
) Tensor#

Applies distortion to camera rays in backward direction, from internal to external
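
A concrete model only needs to provide the two abstract ray transforms. A trivial identity sketch, using a local ABC stand-in so it runs without ncore (a real subclass would derive from ExternalDistortionModel and operate on tensors):

```python
from abc import ABC, abstractmethod

class ExternalDistortionModelSketch(ABC):
    """Local stand-in for ncore.sensors.ExternalDistortionModel."""
    @abstractmethod
    def distort_camera_rays(self, camera_rays): ...
    @abstractmethod
    def undistort_camera_rays(self, camera_rays): ...

class IdentityDistortion(ExternalDistortionModelSketch):
    """Degenerate model: forward and backward passes leave rays unchanged."""
    def distort_camera_rays(self, camera_rays):
        return camera_rays
    def undistort_camera_rays(self, camera_rays):
        return camera_rays

model = IdentityDistortion()
print(model.undistort_camera_rays(model.distort_camera_rays([0.0, 0.0, 1.0])))
```

The forward/backward pair is expected to round-trip: undistort_camera_rays should invert distort_camera_rays.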

class ncore.sensors.FThetaCameraModel(
camera_model_parameters: FThetaCameraModelParameters,
device: str | device = device(type='cuda'),
dtype: dtype = torch.float32,
newton_iterations: int = 3,
min_2d_norm: float = 1e-06,
)#

Bases: CameraModel

Camera model for F-Theta lenses

A: Tensor#
Ainv: Tensor#
bw_poly: Tensor#
dbw_poly: Tensor#
dfw_poly: Tensor#
fw_poly: Tensor#
get_parameters() FThetaCameraModelParameters#

Returns the camera model parameters specific to the current camera model instance

max_angle: float#
min_2d_norm: Tensor#
newton_iterations: int#
principal_point: Tensor#
reference_poly: PolynomialType#
class ncore.sensors.LidarModel(device: str | device, dtype: dtype)#

Bases: BaseModel, ABC

Base class for all lidar models

class SensorAnglesReturn(
sensor_angles: Tensor,
valid_flag: Tensor,
)#

Bases: object

Contains
  • sensor angles [float] (n,2)

  • valid_flag [bool] (n,)

sensor_angles: Tensor#
valid_flag: Tensor#
class SensorRayReturn(
sensor_rays: Tensor,
valid_flag: Tensor,
)#

Bases: object

Contains
  • sensor rays [float] (n,3)

  • valid_flag [bool] (n,)

sensor_rays: Tensor#
valid_flag: Tensor#
class WorldPointsToSensorAnglesReturn(
sensor_angles: Tensor,
T_world_sensors: Tensor | None = None,
valid_indices: Tensor | None = None,
timestamps_us: Tensor | None = None,
)#

Bases: object

Contains
  • sensor angles of the valid projections [float] (n,2)

  • [optional] world-to-sensor poses of valid projections [float] (n,4,4)

  • [optional] indices of the valid projections relative to the input points [int] (n,)

  • [optional] timestamps of the valid projections [int] (n,)

T_world_sensors: Tensor | None = None#
sensor_angles: Tensor#
timestamps_us: Tensor | None = None#
valid_indices: Tensor | None = None#
class WorldRaysReturn(
world_rays: Tensor,
T_sensor_worlds: Tensor | None = None,
timestamps_us: Tensor | None = None,
)#

Bases: object

Contains
  • rays [point, direction] in the world coordinate frame, represented by 3d start of ray points and 3d ray directions [float] (n,6)

  • [optional] sensor-to-worlds poses of the returned rays [float] (n,4,4)

  • [optional] timestamps of the returned rays [int] (n,)

T_sensor_worlds: Tensor | None = None#
timestamps_us: Tensor | None = None#
world_rays: Tensor#
static maybe_from_parameters(
lidar_model_parameters: RowOffsetStructuredSpinningLidarModelParameters | None,
device: str | device = device(type='cuda'),
dtype: dtype = torch.float32,
) LidarModel | None#

Initialize a generic lidar model from parameters, if available

abstractmethod sensor_angles_to_sensor_rays(
sensor_angles: torch.Tensor | np.ndarray,
) SensorRayReturn#

Lidar model-specific implementation of elevation/azimuth angles to sensor rays

abstractmethod sensor_rays_to_sensor_angles(
sensor_rays: torch.Tensor | np.ndarray,
normalized: bool = True,
) SensorAnglesReturn#

Lidar model-specific implementation of sensor_rays_to_sensor_angles

class ncore.sensors.OpenCVFisheyeCameraModel(
camera_model_parameters: OpenCVFisheyeCameraModelParameters,
device: str | device = device(type='cuda'),
dtype: dtype = torch.float32,
newton_iterations: int = 3,
min_2d_norm: float = 1e-06,
)#

Bases: CameraModel

Camera model for OpenCV fisheye cameras

approx_backward_poly: Tensor#
dforward_poly: Tensor#
focal_length: Tensor#
forward_poly: Tensor#
get_parameters() OpenCVFisheyeCameraModelParameters#

Returns the camera model parameters specific to the current camera model instance

max_angle: float#
min_2d_norm: Tensor#
newton_iterations: int#
principal_point: Tensor#
class ncore.sensors.OpenCVPinholeCameraModel(
camera_model_parameters: OpenCVPinholeCameraModelParameters,
device: str | device = device(type='cuda'),
dtype: dtype = torch.float32,
)#

Bases: CameraModel

Camera model for OpenCV pinhole cameras

focal_length: Tensor#
get_parameters() OpenCVPinholeCameraModelParameters#

Returns the camera model parameters specific to the current camera model instance

principal_point: Tensor#
radial_coeffs: Tensor#
tangential_coeffs: Tensor#
thin_prism_coeffs: Tensor#
class ncore.sensors.RowOffsetStructuredSpinningLidarModel(
parameters: RowOffsetStructuredSpinningLidarModelParameters,
angles_to_columns_map_resolution_factor: int = 4,
angles_to_columns_map_dtype: dtype = torch.int16,
angles_to_columns_map_init: bool = False,
device: str | device = device(type='cuda'),
dtype: dtype = torch.float32,
fov_eps_factor: float = 4.0,
)#

Bases: StructuredLidarModel

Represents a structured spinning lidar model that uses a per-row azimuth offset (compatible with, e.g., Hesai P128 sensors)

angles_to_columns_map: Tensor | None#
angles_to_columns_map_dtype: dtype#
angles_to_columns_map_resolution_factor: int#
column_azimuths_rad: Tensor#
elements_to_sensor_angles(
elements: Tensor | ndarray,
) Tensor#

Retrieves the elevation and azimuth angles for elements in the structured lidar model. Elements are given as (row, column) indices.
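
One illustrative reading of the row-offset structure, using plain lists in place of the model's calibration tensors: each row has a fixed elevation, and its azimuth is the column's base azimuth shifted by a per-row offset. This is a sketch of the idea, not the library's implementation.

```python
def elements_to_sensor_angles(elements, row_elevations_rad,
                              column_azimuths_rad, row_azimuth_offsets_rad):
    """(row, column) elements -> (elevation, azimuth) pairs in radians."""
    angles = []
    for row, col in elements:
        elevation = row_elevations_rad[row]
        azimuth = column_azimuths_rad[col] + row_azimuth_offsets_rad[row]
        angles.append((elevation, azimuth))
    return angles

print(elements_to_sensor_angles([(1, 0)], [0.0, 0.1], [0.0, 0.5], [0.0, 0.02]))
```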

elements_to_world_rays_shutter_pose(
elements: Tensor | ndarray,
T_sensor_world_start: Tensor | ndarray,
T_sensor_world_end: Tensor | ndarray,
start_timestamp_us: int | None = None,
end_timestamp_us: int | None = None,
sensor_rays: Tensor | ndarray | None = None,
return_T_sensor_worlds: bool = False,
return_timestamps: bool = False,
) WorldRaysReturn#

Unprojects elements to world rays using rolling-shutter compensation of sensor motion.

Can optionally re-use known sensor rays associated with elements.

For each element returns 3d world rays [point, direction], represented by 3d start of ray points and 3d ray directions in the world frame

fov_eps_rad: float#
fov_horiz: FOV#
fov_vert: FOV#
get_parameters() RowOffsetStructuredSpinningLidarModelParameters#

Returns the lidar model parameters specific to the current lidar model instance

n_columns: int#
n_rows: int#
row_azimuth_offsets_rad: Tensor#
row_elevations_rad: Tensor#
sensor_angles_relative_frame_times(
sensor_angles: Tensor | ndarray,
) Tensor#

Get relative frame-times of sensor angle coordinates using internal angle to column mapping.

All sensor angles need to be in the FOV of the sensor.
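
For a spinning lidar the relative frame time is essentially the azimuth's position within one revolution. A sketch under the assumption that one frame covers a full 2*pi sweep starting at a known azimuth, with 'cw'/'ccw' flipping the sweep direction (the start-azimuth convention is hypothetical):

```python
import math

def azimuth_relative_frame_time(azimuth_rad, start_azimuth_rad, direction="cw"):
    """Relative frame time in [0, 1) for an azimuth within one lidar revolution."""
    delta = azimuth_rad - start_azimuth_rad
    if direction == "cw":
        delta = -delta  # clockwise sweep decreases azimuth over time
    return (delta % (2.0 * math.pi)) / (2.0 * math.pi)

print(azimuth_relative_frame_time(math.pi / 2.0, 0.0, direction="ccw"))  # 0.25
```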

sensor_angles_to_sensor_rays(
sensor_angles: Tensor | ndarray,
) SensorRayReturn#

Computes the sensor rays for elevation/azimuth angles.

sensor_rays_to_sensor_angles(
sensor_rays: Tensor | ndarray,
normalized: bool = True,
) SensorAnglesReturn#

Computes the elevation and azimuth angles for normalized 3d sensor rays.
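
The angle/ray pair of methods forms an invertible spherical mapping. A sketch under one common convention (azimuth measured in the x/y plane, elevation measured from that plane; the model's actual axis convention may differ):

```python
import math

def ray_to_sensor_angles(x, y, z):
    """Normalized 3d sensor ray -> (elevation, azimuth) in radians."""
    azimuth = math.atan2(y, x)
    elevation = math.asin(z)  # assumes the ray is unit length
    return elevation, azimuth

def sensor_angles_to_ray(elevation, azimuth):
    """Inverse mapping back to a unit-length sensor ray."""
    ce = math.cos(elevation)
    return (ce * math.cos(azimuth), ce * math.sin(azimuth), math.sin(elevation))

print(sensor_angles_to_ray(*ray_to_sensor_angles(1.0, 0.0, 0.0)))  # (1.0, 0.0, 0.0)
```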

spinning_direction: Literal['cw', 'ccw']#
spinning_frequency_hz: float#
world_points_to_sensor_angles_shutter_pose(
world_points: Tensor | ndarray,
T_world_sensor_start: Tensor | ndarray,
T_world_sensor_end: Tensor | ndarray,
start_timestamp_us: int | None = None,
end_timestamp_us: int | None = None,
max_iterations: int = 10,
stop_mean_relative_time_error: float = 0.0001,
stop_delta_mean_relative_time_error: float = 1e-06,
return_T_world_sensors: bool = False,
return_valid_indices: bool = False,
return_timestamps: bool = False,
) WorldPointsToSensorAnglesReturn#

Projects world points to corresponding sensor angle coordinates using rolling-shutter compensation of sensor motion

class ncore.sensors.StructuredLidarModel(device: str | device, dtype: dtype)#

Bases: LidarModel, ABC

abstractmethod elements_to_sensor_angles(
elements: Tensor | ndarray,
) Tensor#

Lidar model-specific implementation of elements_to_sensor_angles

elements_to_sensor_points(
elements: Tensor | ndarray,
element_distances: Tensor | ndarray,
) Tensor#

Computes 3d sensor points for elements in the structured lidar model. Elements are given as (row, column) indices.

elements_to_sensor_rays(
elements: Tensor | ndarray,
) Tensor#

Computes normalized 3d sensor ray directions for elements in the structured lidar model. Elements are given as (row, column) indices.

static maybe_from_parameters(
lidar_model_parameters: RowOffsetStructuredSpinningLidarModelParameters | None,
device: str | device = device(type='cuda'),
dtype: dtype = torch.float32,
) StructuredLidarModel | None#

Initialize a generic lidar model from parameters, if available