ncore.sensors Package#
Package exposing methods related to NCore’s sensor types
- class ncore.sensors.BivariateWindshieldModel(
- windshield_distortion_parameters: BivariateWindshieldModelParameters,
- device: str | device = device(type='cuda'),
- dtype: dtype = torch.float32,
Bases:
ExternalDistortionModel
Implements an external distortion caused by a vehicle's windshield. The model is only applicable to cameras whose whole area of interest is projected through the windshield.
The distortion is computed on a spherical phi/theta angle-based representation of a sensor ray with direction=[x,y,z] such that phi = asin(x/(x^2+y^2+z^2)) and theta = asin(y/(x^2+y^2+z^2)).
Phi and theta are then deflected via distortion polynomials before the transformed ray is reconstructed with direction [sin(phi'), sin(theta'), 1-(x^2+y^2)]. The distortions on phi and theta are computed via separate polynomials of order N in both phi and theta, e.g. phi' = c0 + c1*phi + c2*phi^2 + (c3 + c4*phi)*theta + c5*theta^2
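As an illustration (plain Python, not the package's implementation), the angle representation above can be computed from a ray direction. The sketch normalizes by the Euclidean norm so the asin argument stays in range for unnormalized input; for unit-length directions this agrees with the formulas above.

```python
import math

def ray_to_phi_theta(x, y, z):
    # Normalize the ray direction, then take the phi/theta angles
    # described above: phi from the x component, theta from y.
    norm = math.sqrt(x * x + y * y + z * z)
    return math.asin(x / norm), math.asin(y / norm)
```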
- static compute_poly_order(
- poly_coeffs: Tensor,
Computes the order of a bivariate polynomial given its array of coefficients
- distort_camera_rays(
- camera_rays: Tensor,
Applies distortion to camera rays in forward direction, from external to internal
- static distort_rays(
- camera_rays: Tensor,
- phi_poly: Tensor,
- theta_poly: Tensor,
- order_phi: int,
- order_theta: int,
- poly2d_eval_func,
Applies distortion to rays in forward direction, from external to internal
- get_parameters() BivariateWindshieldModelParameters#
Returns the parameters specific to the current windshield distortion model instance
- horizontal_poly: Tensor#
Polynomial used for horizontal component of distortion in forward direction
- horizontal_poly_inverse: Tensor#
Polynomial used for horizontal component of distortion in backward direction
- static poly_eval_2d( ) Tensor#
The bivariate polynomial, provided as a single-dimension tensor [c0, c1, c2, …, cn], is evaluated as: c0*x^0 + c1*x^1 + c2*x^2 + (c3*x^0 + c4*x^1)*y^1 + (c5*x^0)*y^2. In essence, the coefficient of each successive power of y is itself a polynomial in x, whose degree decreases by one per power of y.
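A plain-Python sketch of this coefficient layout, together with the order computation it implies (an order-N polynomial in this scheme has (N+1)(N+2)/2 coefficients). The function names mirror the documented methods but are illustrative, not the package's implementation:

```python
def poly_eval_2d(coeffs, x, y, order):
    # Coefficients are grouped by increasing power of y; each group is a
    # polynomial in x whose degree shrinks by one per power of y,
    # matching c0 + c1*x + c2*x^2 + (c3 + c4*x)*y + c5*y^2 for order 2.
    result = 0.0
    idx = 0
    for py in range(order + 1):
        for px in range(order + 1 - py):
            result += coeffs[idx] * (x ** px) * (y ** py)
            idx += 1
    return result

def compute_poly_order(n_coeffs):
    # Invert the triangular count (order+1)(order+2)/2 of coefficients.
    order = 0
    while (order + 1) * (order + 2) // 2 < n_coeffs:
        order += 1
    return order
```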
- reference_poly: ReferencePolynomial#
Reference polynomial used for the distortion model
- class ncore.sensors.CameraModel( )#
Bases:
BaseModel, ABC
Base class for all camera models
- class ImagePointsReturn( )#
Bases:
object
Contains:
image point coordinates [float] (n,2)
valid_flag [bool] (n,)
[optional] Jacobians of the projection [float] (n,2,3)
- class WorldPointsToImagePointsReturn(
- image_points: Tensor,
- T_world_sensors: Tensor | None = None,
- valid_indices: Tensor | None = None,
- timestamps_us: Tensor | None = None,
Bases:
object
Contains:
image point coordinates of the valid projections [float] (n,2)
[optional] world-to-sensor poses of valid projections [float] (n,4,4)
[optional] indices of the valid projections relative to the input points [int] (n,)
[optional] timestamps of the valid projections [int] (n,)
- class WorldPointsToPixelsReturn(
- pixels: Tensor,
- T_world_sensors: Tensor | None = None,
- valid_indices: Tensor | None = None,
- timestamps_us: Tensor | None = None,
Bases:
object
Contains:
pixel indices of the valid projections [int] (n,2)
[optional] world-to-sensor poses of valid projections [float] (n,4,4)
[optional] indices of the valid projections relative to the input points [int] (n,)
[optional] timestamps of the valid projections [int] (n,)
- class WorldRaysReturn( )#
Bases:
object
Contains:
rays [point, direction] in the world coordinate frame, represented by 3d start of ray points and 3d ray directions [float] (n,6)
[optional] sensor-to-world poses of the returned rays [float] (n,4,4)
[optional] timestamps of the returned rays [int] (n,)
- camera_rays_to_image_points( ) ImagePointsReturn#
For each camera ray, computes the corresponding image point coordinates and a valid flag. Optionally, the Jacobians of the per-ray transformations can be computed as well
- camera_rays_to_pixels( ) PixelsReturn#
For each camera ray, computes the corresponding pixel index and a valid flag
- external_distortion: ExternalDistortionModel | None#
Source of distortion external to the camera (e.g. windshield). Can be empty (None) if no such source exists.
- static from_parameters(
- cam_model_parameters: FThetaCameraModelParameters | OpenCVPinholeCameraModelParameters | OpenCVFisheyeCameraModelParameters,
- device: str | device = device(type='cuda'),
- dtype: dtype = torch.float32,
Initialize a generic camera model class from camera model parameters
- image_points_relative_frame_times( ) Tensor#
Convenience wrapper for image_points_relative_frame_times_kernel that supplies the camera's resolution and shutter type and handles tensor conversion
- static image_points_relative_frame_times_kernel(
- image_points: Tensor,
- resolution: Tensor,
- shutter_type: ShutterType,
Get relative frame-times based on the image point coordinates and rolling shutter type
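A hypothetical sketch of what such a mapping could look like. The string shutter-type names, the [0, 1] time span over the rolling direction, and the 0.5 value for a global shutter are assumptions for illustration, not the kernel's actual convention:

```python
def relative_frame_times(image_points, height, width, shutter_type):
    # Map each image point to a relative capture time within the frame.
    times = []
    for x, y in image_points:
        if shutter_type == "TOP_TO_BOTTOM":
            times.append(y / height)   # rows exposed top to bottom
        elif shutter_type == "LEFT_TO_RIGHT":
            times.append(x / width)    # columns exposed left to right
        else:
            times.append(0.5)          # global shutter: single mid-frame time
    return times
```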
- image_points_to_pixels( ) Tensor#
Given continuous image point coordinates, computes the corresponding pixel indices.
- image_points_to_world_rays_mean_pose(
- image_points: Tensor | ndarray,
- T_sensor_world_start: Tensor | ndarray,
- T_sensor_world_end: Tensor | ndarray,
- start_timestamp_us: int | None = None,
- end_timestamp_us: int | None = None,
- camera_rays: Tensor | ndarray | None = None,
- return_T_sensor_worlds: bool = False,
- return_timestamps: bool = False,
Unprojects image points to world rays using the mean pose of the sensor between the start and end poses (not compensating for potential sensor-motion).
Can optionally re-use known camera rays associated with image points.
For each image point returns 3d world rays [point, direction], represented by 3d start of ray points and 3d ray directions in the world frame
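A minimal sketch of the mean-pose idea, with illustrative helpers rather than the model's API. A faithful mean pose would also average the rotations (e.g. via slerp); the sketch averages translations only:

```python
def ray_to_world(R_world_sensor, t_world_sensor, ray_dir_sensor):
    # Sensor-frame ray -> world-frame ray [point, direction] (6 values):
    # the ray starts at the sensor position; the direction is rotated
    # by the sensor-to-world rotation.
    direction = [
        sum(R_world_sensor[i][j] * ray_dir_sensor[j] for j in range(3))
        for i in range(3)
    ]
    return list(t_world_sensor) + direction

def mean_translation(t_start, t_end):
    # Mean-pose variant: midpoint of the start/end sensor positions.
    return [(a + b) / 2.0 for a, b in zip(t_start, t_end)]
```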
- image_points_to_world_rays_shutter_pose(
- image_points: Tensor | ndarray,
- T_sensor_world_start: Tensor | ndarray,
- T_sensor_world_end: Tensor | ndarray,
- start_timestamp_us: int | None = None,
- end_timestamp_us: int | None = None,
- camera_rays: Tensor | ndarray | None = None,
- return_T_sensor_worlds: bool = False,
- return_timestamps: bool = False,
Unprojects image points to world rays using rolling-shutter compensation of sensor motion.
Can optionally re-use known camera rays associated with image points.
For each image point returns 3d world rays [point, direction], represented by 3d start of ray points and 3d ray directions in the world frame
- image_points_to_world_rays_static_pose(
- image_points: Tensor | ndarray,
- T_sensor_world: Tensor | ndarray,
- timestamp_us: int | None = None,
- camera_rays: Tensor | ndarray | None = None,
- return_T_sensor_worlds: bool = False,
- return_timestamps: bool = False,
Unprojects image points to world rays using a fixed sensor pose (not compensating for potential sensor-motion).
Can optionally re-use known camera rays associated with image points.
For each image point returns 3d world rays [point, direction], represented by 3d start of ray points and 3d ray directions in the world frame
- pixels_to_image_points( ) Tensor#
Given integer-based pixel indices, computes the corresponding continuous image point coordinates representing the center of each pixel.
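Assuming the usual half-pixel-offset convention (consistent with "center of each pixel" above, though the exact convention is defined by the implementation), the two conversions can be sketched as:

```python
import math

def pixels_to_image_points(pixels):
    # Pixel index -> continuous coordinate of that pixel's center.
    return [(px + 0.5, py + 0.5) for px, py in pixels]

def image_points_to_pixels(points):
    # Continuous image point -> index of the containing pixel.
    return [(math.floor(x), math.floor(y)) for x, y in points]
```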
- pixels_to_world_rays_mean_pose(
- pixel_idxs: Tensor | ndarray,
- T_sensor_world_start: Tensor | ndarray,
- T_sensor_world_end: Tensor | ndarray,
- start_timestamp_us: int | None = None,
- end_timestamp_us: int | None = None,
- camera_rays: Tensor | ndarray | None = None,
- return_T_sensor_worlds: bool = False,
- return_timestamps: bool = False,
- pixels_to_world_rays_shutter_pose(
- pixel_idxs: Tensor | ndarray,
- T_sensor_world_start: Tensor | ndarray,
- T_sensor_world_end: Tensor | ndarray,
- start_timestamp_us: int | None = None,
- end_timestamp_us: int | None = None,
- camera_rays: Tensor | ndarray | None = None,
- return_T_sensor_worlds: bool = False,
- return_timestamps: bool = False,
- pixels_to_world_rays_static_pose(
- pixel_idxs: Tensor | ndarray,
- T_sensor_world: Tensor | ndarray,
- timestamp_us: int | None = None,
- camera_rays: Tensor | ndarray | None = None,
- return_T_sensor_worlds: bool = False,
- return_timestamps: bool = False,
- shutter_type: ShutterType#
Shutter type of the camera’s imaging sensor
- world_points_to_image_points_mean_pose(
- world_points: Tensor | ndarray,
- T_world_sensor_start: Tensor | ndarray,
- T_world_sensor_end: Tensor | ndarray,
- start_timestamp_us: int | None = None,
- end_timestamp_us: int | None = None,
- return_T_world_sensors: bool = False,
- return_valid_indices: bool = False,
- return_timestamps: bool = False,
- return_all_projections: bool = False,
Projects world points to corresponding image point coordinates using the mean pose of the sensor between the start and end poses (not compensating for potential sensor-motion).
- world_points_to_image_points_shutter_pose(
- world_points: Tensor | ndarray,
- T_world_sensor_start: Tensor | ndarray,
- T_world_sensor_end: Tensor | ndarray,
- start_timestamp_us: int | None = None,
- end_timestamp_us: int | None = None,
- max_iterations: int = 10,
- stop_mean_error_px: float = 0.001,
- stop_delta_mean_error_px: float = 1e-05,
- return_T_world_sensors: bool = False,
- return_valid_indices: bool = False,
- return_timestamps: bool = False,
- return_all_projections: bool = False,
Projects world points to corresponding image point coordinates using rolling-shutter compensation of sensor motion
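The iterative scheme suggested by max_iterations and the stop criteria can be sketched as a fixed-point loop: project with a guessed capture time, read back the time implied by the projected position, and repeat until the estimate stops moving. This single-point sketch uses an absolute time error, whereas the actual method's criteria (stop_mean_error_px, stop_delta_mean_error_px) are mean pixel errors over all points:

```python
def shutter_project(world_point, project_at_time, time_of_image_point,
                    max_iterations=10, stop_error=1e-3):
    # project_at_time(point, t): projection under the pose at relative
    # time t; time_of_image_point(p): relative capture time implied by
    # the projected position. Both are caller-supplied stand-ins.
    t = 0.5  # start from the mid-frame pose
    point = project_at_time(world_point, t)
    for _ in range(max_iterations):
        t_new = time_of_image_point(point)
        point = project_at_time(world_point, t_new)
        if abs(t_new - t) < stop_error:
            break
        t = t_new
    return point
```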
- world_points_to_image_points_static_pose(
- world_points: Tensor | ndarray,
- T_world_sensor: Tensor | ndarray,
- timestamp_us: int | None = None,
- return_T_world_sensors: bool = False,
- return_valid_indices: bool = False,
- return_timestamps: bool = False,
- return_all_projections: bool = False,
Projects world points to corresponding image point coordinates using a fixed sensor pose (not compensating for potential sensor-motion).
- world_points_to_pixels_mean_pose(
- world_points: Tensor | ndarray,
- T_world_sensor_start: Tensor | ndarray,
- T_world_sensor_end: Tensor | ndarray,
- start_timestamp_us: int | None = None,
- end_timestamp_us: int | None = None,
- return_T_world_sensors: bool = False,
- return_valid_indices: bool = False,
- return_timestamps: bool = False,
- return_all_projections: bool = False,
Projects world points to corresponding pixel indices using the mean pose of the sensor between the start and end poses (not compensating for potential sensor-motion).
- world_points_to_pixels_shutter_pose(
- world_points: Tensor | ndarray,
- T_world_sensor_start: Tensor | ndarray,
- T_world_sensor_end: Tensor | ndarray,
- start_timestamp_us: int | None = None,
- end_timestamp_us: int | None = None,
- max_iterations: int = 10,
- stop_mean_error_px: float = 0.001,
- stop_delta_mean_error_px: float = 1e-05,
- return_T_world_sensors: bool = False,
- return_valid_indices: bool = False,
- return_timestamps: bool = False,
- return_all_projections: bool = False,
Projects world points to corresponding pixel indices using rolling-shutter compensation of sensor motion
- world_points_to_pixels_static_pose(
- world_points: Tensor | ndarray,
- T_world_sensor: Tensor | ndarray,
- timestamp_us: int | None = None,
- return_T_world_sensors: bool = False,
- return_valid_indices: bool = False,
- return_timestamps: bool = False,
- return_all_projections: bool = False,
Projects world points to corresponding pixel indices using a fixed sensor pose (not compensating for potential sensor-motion).
- class ncore.sensors.ExternalDistortionModel( )#
Bases:
BaseModel, ABC
Base class for distortion effects from external causes to the camera
- abstractmethod distort_camera_rays(
- camera_rays: Tensor,
Applies distortion to camera rays in forward direction, from external to internal
- static from_parameters(
- external_distortion_parameters: BivariateWindshieldModelParameters,
- device: str | device = device(type='cuda'),
- dtype: dtype = torch.float32,
Initialize a generic external distortion model from parameters
- abstractmethod get_parameters() BivariateWindshieldModelParameters#
Returns the parameters specific to the concrete distortion model
- class ncore.sensors.FThetaCameraModel(
- camera_model_parameters: FThetaCameraModelParameters,
- device: str | device = device(type='cuda'),
- dtype: dtype = torch.float32,
- newton_iterations: int = 3,
- min_2d_norm: float = 1e-06,
Bases:
CameraModel
Camera model for F-Theta lenses
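As a rough sketch of the f-theta idea (not this class's implementation): the image radius is a polynomial in the angle theta between the ray and the optical axis, and unprojection inverts that polynomial, which is where the newton_iterations parameter comes in. poly, cx, and cy below are illustrative stand-ins for the model's distortion polynomial and principal point:

```python
import math

def ftheta_project(x, y, z, poly, cx, cy):
    # Angle between the ray and the optical axis (z).
    theta = math.atan2(math.sqrt(x * x + y * y), z)
    # Image radius as a polynomial in theta.
    r = sum(c * theta ** i for i, c in enumerate(poly))
    norm = math.sqrt(x * x + y * y)
    if norm < 1e-6:          # on-axis ray: project to the principal point
        return cx, cy
    return cx + r * x / norm, cy + r * y / norm
```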
- get_parameters() FThetaCameraModelParameters#
Returns the camera model parameters specific to the current camera model instance
- reference_poly: PolynomialType#
- class ncore.sensors.LidarModel(device: str | device, dtype: dtype)#
Bases:
BaseModel, ABC
Base class for all lidar models
- class SensorAnglesReturn( )#
Bases:
object
Contains:
sensor angles [float] (n,2)
valid_flag [bool] (n,)
- class WorldPointsToSensorAnglesReturn(
- sensor_angles: Tensor,
- T_world_sensors: Tensor | None = None,
- valid_indices: Tensor | None = None,
- timestamps_us: Tensor | None = None,
Bases:
object
Contains:
sensor angles of the valid projections [float] (n,2)
[optional] world-to-sensor poses of valid projections [float] (n,4,4)
[optional] indices of the valid projections relative to the input points [int] (n,)
[optional] timestamps of the valid projections [int] (n,)
- class WorldRaysReturn( )#
Bases:
object
Contains:
rays [point, direction] in the world coordinate frame, represented by 3d start of ray points and 3d ray directions [float] (n,6)
[optional] sensor-to-world poses of the returned rays [float] (n,4,4)
[optional] timestamps of the returned rays [int] (n,)
- static maybe_from_parameters(
- lidar_model_parameters: RowOffsetStructuredSpinningLidarModelParameters | None,
- device: str | device = device(type='cuda'),
- dtype: dtype = torch.float32,
Initialize a generic lidar model from parameters, if available
- abstractmethod sensor_angles_to_sensor_rays(
- sensor_angles: torch.Tensor | np.ndarray,
Lidar model-specific implementation of elevation/azimuth angles to sensor rays
- abstractmethod sensor_rays_to_sensor_angles(
- sensor_rays: torch.Tensor | np.ndarray,
- normalized: bool = True,
Lidar model-specific implementation of sensor_rays_to_sensor_angles
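The exact angle convention is model-specific; one common choice, assumed here purely for illustration (x forward, z up, azimuth in the horizontal plane, elevation measured from it), looks like:

```python
import math

def sensor_angles_to_ray(elevation, azimuth):
    # Angles -> unit ray direction under the assumed convention.
    return (
        math.cos(elevation) * math.cos(azimuth),
        math.cos(elevation) * math.sin(azimuth),
        math.sin(elevation),
    )

def ray_to_sensor_angles(x, y, z):
    # Unit ray -> (elevation, azimuth), inverting the mapping above.
    return math.asin(z), math.atan2(y, x)
```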
- class ncore.sensors.OpenCVFisheyeCameraModel(
- camera_model_parameters: OpenCVFisheyeCameraModelParameters,
- device: str | device = device(type='cuda'),
- dtype: dtype = torch.float32,
- newton_iterations: int = 3,
- min_2d_norm: float = 1e-06,
Bases:
CameraModel
Camera model for OpenCV fisheye cameras
- get_parameters() OpenCVFisheyeCameraModelParameters#
Returns the camera model parameters specific to the current camera model instance
- class ncore.sensors.OpenCVPinholeCameraModel(
- camera_model_parameters: OpenCVPinholeCameraModelParameters,
- device: str | device = device(type='cuda'),
- dtype: dtype = torch.float32,
Bases:
CameraModel
Camera model for OpenCV pinhole cameras
- get_parameters() OpenCVPinholeCameraModelParameters#
Returns the camera model parameters specific to the current camera model instance
- class ncore.sensors.RowOffsetStructuredSpinningLidarModel(
- parameters: RowOffsetStructuredSpinningLidarModelParameters,
- angles_to_columns_map_resolution_factor: int = 4,
- angles_to_columns_map_dtype: dtype = torch.int16,
- angles_to_columns_map_init: bool = False,
- device: str | device = device(type='cuda'),
- dtype: dtype = torch.float32,
- fov_eps_factor: float = 4.0,
Bases:
StructuredLidarModel
Represents a structured spinning lidar model that uses a per-row azimuth offset (compatible with, e.g., Hesai P128 sensors)
- elements_to_sensor_angles( ) Tensor#
Retrieves the elevation and azimuth angles for elements in the structured lidar model. Elements are given as (row, column) indices.
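The per-row azimuth-offset layout can be sketched as follows: each row has a fixed elevation, and its azimuth is the column's nominal angle shifted by a per-row offset. elevations, row_azimuth_offsets, and azimuth_step are illustrative names, not the model's parameters:

```python
def element_to_sensor_angles(row, column, elevations, row_azimuth_offsets,
                             azimuth_step):
    # (row, column) element -> (elevation, azimuth) under the sketched
    # layout: per-row elevation, column spacing plus a per-row shift.
    elevation = elevations[row]
    azimuth = column * azimuth_step + row_azimuth_offsets[row]
    return elevation, azimuth
```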
- elements_to_world_rays_shutter_pose(
- elements: Tensor | ndarray,
- T_sensor_world_start: Tensor | ndarray,
- T_sensor_world_end: Tensor | ndarray,
- start_timestamp_us: int | None = None,
- end_timestamp_us: int | None = None,
- sensor_rays: Tensor | ndarray | None = None,
- return_T_sensor_worlds: bool = False,
- return_timestamps: bool = False,
Unprojects elements to world rays using rolling-shutter compensation of sensor motion.
Can optionally re-use known sensor rays associated with elements.
For each element returns 3d world rays [point, direction], represented by 3d start of ray points and 3d ray directions in the world frame
- fov_horiz: FOV#
- fov_vert: FOV#
- get_parameters() RowOffsetStructuredSpinningLidarModelParameters#
Returns the lidar model parameters specific to the current lidar model instance
- sensor_angles_relative_frame_times( ) Tensor#
Get relative frame-times of sensor angle coordinates using internal angle to column mapping.
All sensor angles need to be in the FOV of the sensor.
- sensor_angles_to_sensor_rays( ) SensorRayReturn#
Computes the sensor rays for elevation/azimuth angles.
- sensor_rays_to_sensor_angles( ) SensorAnglesReturn#
Computes the elevation and azimuth angles for normalized 3d sensor rays.
- world_points_to_sensor_angles_shutter_pose(
- world_points: Tensor | ndarray,
- T_world_sensor_start: Tensor | ndarray,
- T_world_sensor_end: Tensor | ndarray,
- start_timestamp_us: int | None = None,
- end_timestamp_us: int | None = None,
- max_iterations: int = 10,
- stop_mean_relative_time_error: float = 0.0001,
- stop_delta_mean_relative_time_error: float = 1e-06,
- return_T_world_sensors: bool = False,
- return_valid_indices: bool = False,
- return_timestamps: bool = False,
Projects world points to corresponding sensor angle coordinates using rolling-shutter compensation of sensor motion
- class ncore.sensors.StructuredLidarModel(device: str | device, dtype: dtype)#
Bases:
LidarModel, ABC
- abstractmethod elements_to_sensor_angles( ) Tensor#
Lidar model-specific implementation of elements_to_sensor_angles
- elements_to_sensor_points( ) Tensor#
Computes 3d sensor points for elements in the structured lidar model. Elements are given as (row, column) indices.
- elements_to_sensor_rays( ) Tensor#
Computes normalized 3d sensor ray directions for elements in the structured lidar model. Elements are given as (row, column) indices.
- static maybe_from_parameters(
- lidar_model_parameters: RowOffsetStructuredSpinningLidarModelParameters | None,
- device: str | device = device(type='cuda'),
- dtype: dtype = torch.float32,
Initialize a generic lidar model from parameters, if available