ncore.data Package#

Package exposing methods related to NCore’s basic data types and abstract APIs

class ncore.data.BBox3(
centroid: Tuple[float, float, float],
dim: Tuple[float, float, float],
rot: Tuple[float, float, float],
)#

Bases: DataClassJsonMixin

Parameters of a 3D bounding-box

centroid: Tuple[float, float, float]#

Coordinates [meters] of the bounding-box’s centroid in the frame of reference

dim: Tuple[float, float, float]#

Extents [meters] of the bounding box along the axes of its local frame

classmethod from_array(
array: ndarray,
) BBox3#

Convert from convenience single-array representation

rot: Tuple[float, float, float]#

‘XYZ’ Euler rotation angles [radians] orienting the local bounding-box frame to the frame of reference

to_array() ndarray#

Convert to convenience single-array representation
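The array round-trip can be illustrated with a minimal stand-in sketch. It assumes the convenience array simply concatenates [centroid, dim, rot] into nine floats; the actual NCore packing may differ.

```python
# Minimal stand-in illustrating a plausible BBox3 <-> array packing.
# ASSUMPTION: the convenience array concatenates [centroid, dim, rot]
# into a flat 9-element vector; the real ncore layout may differ.
from dataclasses import dataclass
from typing import Tuple

import numpy as np


@dataclass
class BBox3Sketch:
    centroid: Tuple[float, float, float]
    dim: Tuple[float, float, float]
    rot: Tuple[float, float, float]

    def to_array(self) -> np.ndarray:
        return np.asarray(self.centroid + self.dim + self.rot, dtype=np.float64)

    @classmethod
    def from_array(cls, array: np.ndarray) -> "BBox3Sketch":
        a = np.asarray(array, dtype=np.float64).reshape(9)
        return cls(tuple(a[0:3]), tuple(a[3:6]), tuple(a[6:9]))
```

With this packing, `from_array(box.to_array())` recovers the original box.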

class ncore.data.BivariateWindshieldModelParameters(
reference_poly: ReferencePolynomial = None,
horizontal_poly: ndarray = None,
vertical_poly: ndarray = None,
horizontal_poly_inverse: ndarray = None,
vertical_poly_inverse: ndarray = None,
)#

Bases: DataClassJsonMixin

Represents parameters required to create a windshield external distortion model

horizontal_poly: ndarray = None#

Polynomial coefficients used for forward projection on the horizontal component of a ray via its projected angle phi=asin(x/norm(x,y)). The polynomial is of order N in both phi and theta with the form P(phi,N)*P(theta,0) + P(phi,N-1)*P(theta,1) + … + P(phi,0)*P(theta,N), where P(i, N) is a polynomial over “i” of degree N (float32, [(N + 1) * (N + 2) / 2,])

horizontal_poly_inverse: ndarray = None#

Polynomial coefficients used to evaluate the inverse distortion in backprojection of the horizontal component of a ray via its projected angle phi=asin(x/norm(x,y)). The polynomial is of order N in both phi and theta with the form P(phi,N)*P(theta,0) + P(phi,N-1)*P(theta,1) + … + P(phi,0)*P(theta,N), where P(i, N) is a polynomial over “i” of degree N (float32, [(N + 1) * (N + 2) / 2,])

reference_poly: ReferencePolynomial = None#

Reference polynomial of the model

static type() str#

Returns a string-identifier of the external distortion model

vertical_poly: ndarray = None#

Polynomial coefficients used for forward projection on the vertical component of a ray via its projected angle theta=asin(y/norm(x,y)). The polynomial is of order M in both phi and theta with the form P(phi,M)*P(theta,0) + P(phi,M-1)*P(theta,1) + … + P(phi,0)*P(theta,M), where P(i, M) is a polynomial over “i” of degree M (float32, [(M + 1) * (M + 2) / 2,])

vertical_poly_inverse: ndarray = None#

Polynomial coefficients used to evaluate the inverse distortion in backprojection of the vertical component of a ray via its projected angle theta=asin(y/norm(x,y)). The polynomial is of order M in both phi and theta with the form P(phi,M)*P(theta,0) + P(phi,M-1)*P(theta,1) + … + P(phi,0)*P(theta,M), where P(i, M) is a polynomial over “i” of degree M (float32, [(M + 1) * (M + 2) / 2,])
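The triangular coefficient layout above (with (N + 1) * (N + 2) / 2 coefficients for a total-degree-N polynomial in phi and theta) can be sketched as follows. The coefficient ordering used here (by ascending total degree) is an assumption for illustration and may differ from NCore’s actual storage order.

```python
import numpy as np


def eval_triangular_poly(coeffs, phi, theta, order):
    """Evaluate sum of c_ij * phi**i * theta**j over all i + j <= order (sketch)."""
    # ASSUMPTION: coefficients ordered by total degree, i.e.
    # [c_00, c_10, c_01, c_20, c_11, c_02, ...]; the real ordering may differ.
    coeffs = np.asarray(coeffs, dtype=np.float64)
    expected = (order + 1) * (order + 2) // 2
    assert coeffs.shape == (expected,), "triangular layout has (N+1)(N+2)/2 coefficients"
    value, k = 0.0, 0
    for total in range(order + 1):      # total degree i + j of the term
        for j in range(total + 1):      # exponent of theta
            i = total - j               # exponent of phi
            value += coeffs[k] * phi**i * theta**j
            k += 1
    return value
```

For order 1 the layout holds three coefficients, c_00, c_10, and c_01.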

class ncore.data.CameraSensorProtocol(*args, **kwargs)#

Bases: SensorProtocol, Protocol

CameraSensorProtocol provides unified access to a relevant subset of common NCore camera sensor APIs

class EncodedImageDataHandleProtocol(*args, **kwargs)#

Bases: Protocol

References encoded image data without loading it

get_data() EncodedImageData#

Loads the referenced encoded image data to memory

get_frame_data(
frame_index: int,
) EncodedImageData#

Returns the frame’s encoded image data

get_frame_handle(
frame_index: int,
) EncodedImageDataHandleProtocol#

Returns a handle referencing the frame’s encoded image data without loading it

get_frame_image(
frame_index: int,
) Image#

Returns the frame’s decoded image data

get_frame_image_array(
frame_index: int,
) npt.NDArray[np.uint8]#

Returns decoded image data as array [W,H,C]

get_mask_images() Dict[str, Image]#

Returns all available named camera mask images

property model_parameters: FThetaCameraModelParameters | OpenCVPinholeCameraModelParameters | OpenCVFisheyeCameraModelParameters#

Returns parameters specific to the camera’s intrinsic model

ncore.data.ConcreteExternalDistortionParametersUnion#

alias of BivariateWindshieldModelParameters

ncore.data.ConcreteLidarModelParametersUnion#

alias of RowOffsetStructuredSpinningLidarModelParameters

class ncore.data.CuboidTrackObservation(
track_id: str,
class_id: str,
timestamp_us: int,
reference_frame_id: str,
reference_frame_timestamp_us: int,
bbox3: BBox3,
source: LabelSource = None,
source_version: str | None = None,
)#

Bases: DataClassJsonMixin

Individual cuboid track observation relative to a reference frame

bbox3: BBox3#

Bounding-box coordinates of the object relative to the reference frame’s coordinate system

class_id: str#

String-representation of the labeled class of the object

reference_frame_id: str#

String-identifier of the reference frame (e.g., sensor name)

reference_frame_timestamp_us: int#

The timestamp of the reference frame

source: LabelSource = None#

The source for the current label

source_version: str | None = None#
timestamp_us: int#

The timestamp associated with the centroid of the observation (possibly an accurate in-frame time)

track_id: str#

Unique identifier of the object’s track this observation is associated with

transform(
target_frame_id: str,
target_frame_timestamp_us: int,
pose_graph: PoseGraphInterpolator,
anchor_frame_id: str = 'world',
) Self#

Transform the observation’s bounding box to a different reference frame.

Parameters:
  • target_frame_id – ID of the target reference frame

  • target_frame_timestamp_us – Timestamp of the target reference frame

  • pose_graph – PoseGraphInterpolator to perform the evaluation of transformations

  • anchor_frame_id – ID of the common anchor frame for transformations (default: “world”)

Returns:

A CuboidTrackObservation instance with the transformed bounding box and updated reference frame info
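The underlying pose math (not the NCore implementation itself) can be sketched with plain homogeneous transforms: the box pose in the old reference frame is re-expressed in the target frame via the common anchor frame.

```python
# Illustration of the reference-frame change performed by a transform like
# CuboidTrackObservation.transform, using 4x4 homogeneous matrices.
import numpy as np


def change_reference_frame(
    T_box_reference: np.ndarray,     # 4x4 box pose in the old reference frame
    T_reference_anchor: np.ndarray,  # old reference frame -> anchor (e.g. "world")
    T_target_anchor: np.ndarray,     # target reference frame -> anchor
) -> np.ndarray:
    # Composing anchor -> target with reference -> anchor yields
    # reference -> target; applying it moves the box pose into the target frame.
    T_reference_target = np.linalg.inv(T_target_anchor) @ T_reference_anchor
    return T_reference_target @ T_box_reference
```

In the actual API the two frame-to-anchor poses would come from the PoseGraphInterpolator at the respective timestamps.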

class ncore.data.EncodedImageData(encoded_image_data: bytes, encoded_image_format: str)#

Bases: object

Represents encoded image data of a specific format in memory

get_decoded_image() Image#

Returns decoded image from image data

get_encoded_image_data() bytes#

Returns encoded image data

get_encoded_image_format() str#

Returns encoded image format

class ncore.data.EncodedImageHandle(*args, **kwargs)#

Bases: Protocol

Protocol type to reference encoded image data (e.g., file-based, container-based, memory-based)

get_data() EncodedImageData#
class ncore.data.FThetaCameraModelParameters(
resolution: np.ndarray = None,
shutter_type: ShutterType = None,
external_distortion_parameters: ConcreteExternalDistortionParametersUnion | None = None,
principal_point: np.ndarray = None,
reference_poly: PolynomialType = None,
pixeldist_to_angle_poly: np.ndarray = None,
angle_to_pixeldist_poly: np.ndarray = None,
max_angle: float = 0.0,
linear_cde: np.ndarray = <factory>,
)#

Bases: CameraModelParameters, DataClassJsonMixin

Represents FTheta-specific camera model parameters

POLYNOMIAL_DEGREE = 6#
class PolynomialType(value)#

Bases: IntEnum

Enumerates different possible polynomial types

ANGLE_TO_PIXELDIST = 2#

Polynomial mapping angles-to-pixeldistances (also known as “forward” polynomial)

PIXELDIST_TO_ANGLE = 1#

Polynomial mapping pixeldistances-to-angles (also known as “backward” polynomial)

angle_to_pixeldist_poly: np.ndarray = None#

Coefficients of the angles-to-pixeldistances polynomial (float32, [6,])

property bw_poly: ndarray#

Alias for the pixeldistances-to-angles polynomial

property fw_poly: ndarray#

Alias for the angles-to-pixeldistances polynomial

linear_cde: np.ndarray#

Coefficients of the constrained linear term [c,d;e,1] transforming sensor coordinates (in mm) to image coordinates (in px) (float32, [3,])

max_angle: float = 0.0#

Maximal extrinsic ray angle [rad] with the principal direction (float32)

pixeldist_to_angle_poly: np.ndarray = None#

Coefficients of the pixeldistances-to-angles polynomial (float32, [6,])

principal_point: np.ndarray = None#

The u and v coordinates of the principal point, following the NVIDIA default convention for FTheta camera models in which pixel indices represent the centers of pixels (not their top-left corners). Principal point coordinates are adapted internally in camera model APIs to reflect the image coordinate conventions

reference_poly: PolynomialType = None#

Indicating which of the two stored polynomials is the model’s reference polynomial (the other polynomial is only an approximation)
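A hedged sketch of FTheta-style forward projection using the angles-to-pixeldistances polynomial: the ray angle with the principal direction maps to a pixel distance, applied along the in-image direction of the ray. The ascending-power coefficient ordering and the axis conventions here are assumptions for illustration only.

```python
import numpy as np


def project_ftheta(point, principal_point, angle_to_pixeldist_poly):
    """Map a 3D point in the camera frame to pixel coordinates (sketch)."""
    x, y, z = point
    rxy = np.hypot(x, y)
    theta = np.arctan2(rxy, z)  # ray angle with the principal direction
    # ASSUMPTION: ascending-power coefficients c0 + c1*theta + c2*theta^2 + ...
    r = np.polynomial.polynomial.polyval(theta, angle_to_pixeldist_poly)
    pp = np.asarray(principal_point, dtype=np.float64)
    if rxy == 0.0:
        return pp  # ray along the principal direction hits the principal point
    return pp + r * np.array([x, y], dtype=np.float64) / rxy
```

Backprojection would use the pixeldistances-to-angles polynomial analogously; whichever of the two is the reference_poly is exact, the other approximate.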

transform(
image_domain_scale: float | Tuple[float, float],
image_domain_offset: Tuple[float, float] = (0.0, 0.0),
new_resolution: Tuple[int, int] | None = None,
) FThetaCameraModelParameters#

Applies a transformation to the FTheta camera model parameters

Parameters:
  • image_domain_scale – an isotropic (if float) or anisotropic (if tuple of floats) scaling of the full image domain to a scaled image domain (e.g., to account for up-/downsampling). Resulting scaled image resolution needs to be integer if no explicit ‘new_resolution’ is provided.

  • image_domain_offset – an offset of the _scaled_ image domain (e.g., to account for cropping).

  • new_resolution – an optional new resolution to set (if None, the full scaled resolution is used).

Returns:

a transformed version of the FTheta camera model parameters
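The scale-then-offset image-domain convention described by the parameters above can be sketched for a single pixel coordinate. The sign of the offset (subtracted after scaling, as for a crop of the scaled domain) is an assumption and should be verified against the actual behavior.

```python
# Hypothetical sketch of the image-domain mapping implied by 'transform':
# coordinates are scaled (isotropically or per-axis), then offset in the
# scaled domain. ASSUMPTION: the crop offset is subtracted.
def map_pixel(u, v, image_domain_scale, image_domain_offset=(0.0, 0.0)):
    if isinstance(image_domain_scale, tuple):
        su, sv = image_domain_scale
    else:
        su = sv = image_domain_scale
    return su * u - image_domain_offset[0], sv * v - image_domain_offset[1]
```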

static type() str#

Returns a string-identifier of the camera model

class ncore.data.FrameTimepoint(value)#

Bases: IntEnum

Enumerates special timepoints within a frame (values used to index into buffers)

END = 1#

Requested timepoint is referencing the end time of the frame

START = 0#

Requested timepoint is referencing the start time of the frame

class ncore.data.LabelSource(value)#

Bases: IntEnum

Enumerates different sources for labels (auto, manual, GT, synthetic etc.)

AUTOLABEL = 1#

Label originates from an autolabeling pipeline

EXTERNAL = 2#

Label originates from an unspecified external source, e.g., from third-party processes

GT_ANNOTATION = 4#

Label originates from manual annotation and is considered ground-truth

GT_SYNTHETIC = 3#

Label originates from a synthetic data simulation and is considered ground-truth

class ncore.data.LidarSensorProtocol(*args, **kwargs)#

Bases: RayBundleSensorProtocol, Protocol

LidarSensorProtocol provides unified access to a relevant subset of common NCore lidar sensor APIs

get_frame_ray_bundle_model_element(
frame_index: int,
) npt.NDArray[np.uint16] | None#

Returns the per-ray model elements for a ray bundle for a specific frame, if available.

Parameters:

frame_index – Index of the frame

Returns:

Array of per-ray model elements [N,] or None if not available

get_frame_ray_bundle_return_intensity(
frame_index: int,
return_index: int = 0,
) npt.NDArray[np.float32]#

Returns the per-ray measured intensities for a ray bundle return for a specific frame.

Parameters:
  • frame_index – Index of the frame

  • return_index – Index of the ray bundle return to retrieve (for multi-return sensors)

Returns:

Array of per-ray intensities [N,]

property model_parameters: RowOffsetStructuredSpinningLidarModelParameters | None#

Returns parameters specific to the lidar’s intrinsic model (optional; may be None)

class ncore.data.OpenCVFisheyeCameraModelParameters(
resolution: ndarray = None,
shutter_type: ShutterType = None,
external_distortion_parameters: BivariateWindshieldModelParameters | None = None,
principal_point: ndarray = None,
focal_length: ndarray = None,
radial_coeffs: ndarray = None,
max_angle: float = 0.0,
)#

Bases: CameraModelParameters, DataClassJsonMixin

Represents Fisheye-specific (OpenCV-like) camera model parameters

focal_length: ndarray = None#

Focal lengths in u and v direction, resp., mapping (distorted) normalized camera coordinates to image coordinates relative to the principal point (float32, [2,])

max_angle: float = 0.0#

Maximal extrinsic ray angle [rad] with the principal direction (float32)

principal_point: ndarray = None#

The u and v coordinates of the principal point, following the image coordinate conventions (float32, [2,])

radial_coeffs: ndarray = None#

Radial distortion coefficients: OpenCV-like [k1,k2,k3,k4] coefficients parameterizing the angular distortion \(\theta_d = \theta \left(1 + k_1\theta^2 + k_2\theta^4 + k_3\theta^6 + k_4\theta^8\right)\) of the ray angle \(\theta\) with the principal direction (float32, [4,])
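These coefficients follow the standard OpenCV fisheye convention, where the ray angle theta is distorted by an even-power polynomial before projection; a short sketch:

```python
import numpy as np


def distort_angle(theta: float, radial_coeffs) -> float:
    """Standard OpenCV fisheye angular distortion for [k1, k2, k3, k4] (sketch)."""
    k1, k2, k3, k4 = radial_coeffs
    t2 = theta * theta
    return theta * (1.0 + k1 * t2 + k2 * t2**2 + k3 * t2**3 + k4 * t2**4)
```

With all coefficients zero the mapping is the identity, i.e. an ideal equidistant fisheye.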

transform(
image_domain_scale: float | Tuple[float, float],
image_domain_offset: Tuple[float, float] = (0.0, 0.0),
new_resolution: Tuple[int, int] | None = None,
) OpenCVFisheyeCameraModelParameters#

Applies a transformation to the OpenCV fisheye camera model parameters

Parameters:
  • image_domain_scale – an isotropic (if float) or anisotropic (if tuple of floats) scaling of the full image domain to a scaled image domain (e.g., to account for up-/downsampling). Resulting scaled image resolution needs to be integer if no explicit ‘new_resolution’ is provided.

  • image_domain_offset – an offset of the _scaled_ image domain (e.g., to account for cropping).

  • new_resolution – an optional new resolution to set (if None, the full scaled resolution is used).

Returns:

a transformed version of the OpenCV fisheye camera model parameters

static type() str#

Returns a string-identifier of the camera model

class ncore.data.OpenCVPinholeCameraModelParameters(
resolution: ndarray = None,
shutter_type: ShutterType = None,
external_distortion_parameters: BivariateWindshieldModelParameters | None = None,
principal_point: ndarray = None,
focal_length: ndarray = None,
radial_coeffs: ndarray = None,
tangential_coeffs: ndarray = None,
thin_prism_coeffs: ndarray = None,
)#

Bases: CameraModelParameters, DataClassJsonMixin

Represents Pinhole-specific (OpenCV-like) camera model parameters

focal_length: ndarray = None#

Focal lengths in u and v direction, resp., mapping (distorted) normalized camera coordinates to image coordinates relative to the principal point (float32, [2,])

principal_point: ndarray = None#

The u and v coordinates of the principal point, following the image coordinate conventions (float32, [2,])

radial_coeffs: ndarray = None#

Radial distortion coefficients [k1,k2,k3,k4,k5,k6] parameterizing the rational radial distortion factor \(\frac{1 + k_1r^2 + k_2r^4 + k_3r^6}{1 + k_4r^2 + k_5r^4 + k_6r^6}\) for squared norms \(r^2\) of normalized camera coordinates (float32, [6,])

tangential_coeffs: ndarray = None#

Tangential distortion coefficients [p1,p2] parameterizing the tangential distortion components \(\begin{bmatrix} 2p_1x'y' + p_2 \left(r^2 + 2{x'}^2 \right) \\ p_1 \left(r^2 + 2{y'}^2 \right) + 2p_2x'y' \end{bmatrix}\) for normalized camera coordinates \(\begin{bmatrix} x' \\ y' \end{bmatrix}\) (float32, [2,])

thin_prism_coeffs: ndarray = None#

Thin prism distortion coefficients [s1,s2,s3,s4] parameterizing the thin prism distortion components \(\begin{bmatrix} s_1r^2 + s_2r^4 \\ s_3r^2 + s_4r^4 \end{bmatrix}\) for squared norms \(r^2\) of normalized camera coordinates (float32, [4,])
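The three distortion terms documented above can be combined in a short sketch that maps normalized camera coordinates (x', y') to distorted coordinates, mirroring the formulas given for the rational radial, tangential, and thin-prism components:

```python
import numpy as np


def distort_pinhole(xp, yp, radial, tangential, thin_prism):
    """Apply the documented OpenCV-like pinhole distortion terms (sketch)."""
    k1, k2, k3, k4, k5, k6 = radial
    p1, p2 = tangential
    s1, s2, s3, s4 = thin_prism
    r2 = xp * xp + yp * yp
    # Rational radial factor (1 + k1 r^2 + k2 r^4 + k3 r^6) / (1 + k4 r^2 + ...)
    radial_factor = (1 + k1 * r2 + k2 * r2**2 + k3 * r2**3) / (
        1 + k4 * r2 + k5 * r2**2 + k6 * r2**3
    )
    xd = (
        xp * radial_factor
        + 2 * p1 * xp * yp + p2 * (r2 + 2 * xp * xp)   # tangential
        + s1 * r2 + s2 * r2**2                          # thin prism
    )
    yd = (
        yp * radial_factor
        + p1 * (r2 + 2 * yp * yp) + 2 * p2 * xp * yp
        + s3 * r2 + s4 * r2**2
    )
    return xd, yd
```

Multiplying the distorted coordinates by the focal lengths and adding the principal point then yields image coordinates.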

transform(
image_domain_scale: float | Tuple[float, float],
image_domain_offset: Tuple[float, float] = (0.0, 0.0),
new_resolution: Tuple[int, int] | None = None,
) OpenCVPinholeCameraModelParameters#

Applies a transformation to the OpenCV pinhole camera model parameters

Parameters:
  • image_domain_scale – an isotropic (if float) or anisotropic (if tuple of floats) scaling of the full image domain to a scaled image domain (e.g., to account for up-/downsampling). Resulting scaled image resolution needs to be integer if no explicit ‘new_resolution’ is provided.

  • image_domain_offset – an offset of the _scaled_ image domain (e.g., to account for cropping).

  • new_resolution – an optional new resolution to set (if None, the full scaled resolution is used).

Returns:

a transformed version of the OpenCV pinhole camera model parameters

static type() str#

Returns a string-identifier of the camera model

class ncore.data.RadarSensorProtocol(*args, **kwargs)#

Bases: RayBundleSensorProtocol, Protocol

RadarSensorProtocol provides unified access to a relevant subset of common NCore radar sensor APIs

class ncore.data.ReferencePolynomial(value)#

Bases: IntEnum

Enumerates different possible reference polynomial types

BACKWARD = 2#

The backward polynomial is the reference polynomial; the forward polynomial is its (approximate) inverse

FORWARD = 1#

The forward polynomial is the reference polynomial; the backward polynomial is its (approximate) inverse

class ncore.data.RowOffsetStructuredSpinningLidarModelParameters(
spinning_frequency_hz: float,
spinning_direction: Literal['cw', 'ccw'],
n_rows: int,
n_columns: int,
row_elevations_rad: ndarray = None,
column_azimuths_rad: ndarray = None,
row_azimuth_offsets_rad: ndarray = None,
)#

Bases: BaseStructuredSpinningLidarModelParameters, DataClassJsonMixin

Represents parameters for a structured spinning lidar model that uses a per-row azimuth offset (compatible with, e.g., Hesai P128 sensors)

column_azimuths_rad: ndarray = None#
get_horizontal_fov(
dtype: npt.DTypeLike = <class 'numpy.float32'>,
) util.FOV#

Returns the horizontal field-of-view of the lidar model (starting at first element) in the requested dtype precision

get_vertical_fov(
dtype: npt.DTypeLike = <class 'numpy.float32'>,
) util.FOV#

Returns the vertical field-of-view of the lidar model (starting at first element) in the requested dtype precision

row_azimuth_offsets_rad: ndarray = None#
row_elevations_rad: ndarray = None#
static type() str#

Returns a string-identifier of the lidar model
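Per-ray directions could be assembled from the row/column structure roughly as follows. The azimuth composition (column azimuth plus the row’s azimuth offset) and the x-forward/z-up axis convention are illustrative assumptions, not the NCore definition.

```python
import numpy as np


def ray_directions(row_elevations_rad, column_azimuths_rad, row_azimuth_offsets_rad):
    """Assemble unit ray directions for a row-offset spinning lidar (sketch)."""
    el = np.asarray(row_elevations_rad, dtype=np.float64)[:, None]   # [rows, 1]
    # ASSUMPTION: each ray's azimuth is its column azimuth plus its row offset.
    az = (
        np.asarray(column_azimuths_rad, dtype=np.float64)[None, :]
        + np.asarray(row_azimuth_offsets_rad, dtype=np.float64)[:, None]
    )  # [rows, columns]
    # ASSUMPTION: x-forward / z-up spherical convention.
    return np.stack(
        [np.cos(el) * np.cos(az), np.cos(el) * np.sin(az), np.sin(el) * np.ones_like(az)],
        axis=-1,
    )  # [rows, columns, 3]
```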

class ncore.data.SensorProtocol(*args, **kwargs)#

Bases: Protocol

SensorProtocol provides unified access to a relevant subset of common NCore sensor data APIs

property T_sensor_rig: npt.NDArray[np.floating] | None#

Returns the static extrinsic transformation from the sensor to a named rig coordinate frame.

Returns the 4x4 homogeneous transformation matrix T_sensor_rig that transforms points from the sensor coordinate frame to the rig coordinate frame.

Parameters:

rig_node – Name of the rig coordinate frame (default: “rig”)

Returns:

4x4 transformation matrix if the static transformation exists, None otherwise

property frames_count: int#

The number of frames associated with the sensor

property frames_timestamps_us: npt.NDArray[np.uint64]#

The start/end timestamps of the frames associated with the sensor [(N,2) array]

get_closest_frame_index(
timestamp_us: int,
relative_frame_time: float = 1.0,
) int#

Given a timestamp, returns the frame index of the closest frame based on the specified relative frame time-point (0.0 ~= start-of-frames / 1.0 ~= end-of-frames)

get_frame_generic_data(
frame_index: int,
name: str,
) npt.NDArray[Any]#

Returns generic frame-data for a specific frame and name

get_frame_generic_data_names(
frame_index: int,
) List[str]#

List of all generic frame-data names

get_frame_generic_meta_data(
frame_index: int,
) Dict[str, Dict[str, Dict[str, JsonLike] | List[JsonLike] | str | int | float | bool | None | List[int]] | List[Dict[str, JsonLike] | List[JsonLike] | str | int | float | bool | None | List[int]] | str | int | float | bool | None | List[int]]#

Returns generic frame meta-data for a specific frame

get_frame_index_range(
start_frame_index: int | None = None,
stop_frame_index: int | None = None,
step_frame_index: int | None = None,
) range#

Returns a (potentially empty) range of frame indices following start:stop:step slice conventions, defaulting to full frame index range for absent range bound specifiers
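These semantics mirror Python’s own start:stop:step slice convention over the full frame index range, which a one-line sketch makes concrete:

```python
# Sketch of the documented slice semantics: absent bounds default to the
# full frame index range, exactly as Python's slice.indices() resolves them.
def frame_index_range(frames_count, start=None, stop=None, step=None):
    return range(*slice(start, stop, step).indices(frames_count))
```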

get_frame_timestamp_us(
frame_index: int,
frame_timepoint: FrameTimepoint = FrameTimepoint.END,
) int#

Returns the timestamp of a specific frame at the specified relative frame timepoint (start or end)

get_frames_T_sensor_target(
target_node: str,
frame_indices: int | npt.NDArray[np.integer],
frame_timepoint: FrameTimepoint | None = FrameTimepoint.END,
) npt.NDArray[np.floating]#

Evaluates relative poses of the sensor at timestamps inferred from frame indices.

Computes transformation matrices T_sensor_target that transform points from the sensor coordinate frame to the target coordinate frame at specified frame times.

Parameters:
  • target_node – Name of the target coordinate frame

  • frame_indices – Individual index or array of frame indices at which to evaluate poses

  • frame_timepoint – Frame-relative timepoint (START or END). If None, returns both

Returns:

Transformation matrices with shape [frame_indices-shape,2,4,4] if frame_timepoint is None (both start and end poses), else [frame_indices-shape,4,4] (single timepoint)

get_frames_T_source_sensor(
source_node: str,
frame_indices: int | npt.NDArray[np.integer],
frame_timepoint: FrameTimepoint | None = FrameTimepoint.END,
) npt.NDArray[np.floating]#

Evaluates sensor-relative poses at timestamps inferred from frame indices.

Computes transformation matrices T_source_sensor that transform points from the source coordinate frame to the sensor coordinate frame at specified frame times.

Parameters:
  • source_node – Name of the source coordinate frame

  • frame_indices – Individual index or array of frame indices at which to evaluate poses

  • frame_timepoint – Frame-relative timepoint (START or END). If None, returns both

Returns:

Transformation matrices with shape [frame_indices-shape,2,4,4] if frame_timepoint is None (both start and end poses), else [frame_indices-shape,4,4] (single timepoint)

get_frames_T_source_target(
source_node: str,
target_node: str,
frame_indices: int | npt.NDArray[np.integer],
frame_timepoint: FrameTimepoint | None = FrameTimepoint.END,
) npt.NDArray[np.floating]#

Evaluates relative poses at timestamps inferred from frame indices.

Computes transformation matrices T_source_target that transform points from the source coordinate frame to the target coordinate frame at specified frame times.

Parameters:
  • source_node – Name of the source coordinate frame (defaults to sensor_id if None)

  • target_node – Name of the target coordinate frame

  • frame_indices – Individual index or array of frame indices at which to evaluate poses

  • frame_timepoint – Frame-relative timepoint (START or END). If None, returns both

Returns:

Transformation matrices with shape [frame_indices-shape,2,4,4] if frame_timepoint is None (both start and end poses), else [frame_indices-shape,4,4] (single timepoint)

get_frames_timestamps_us(
frame_timepoint: FrameTimepoint = FrameTimepoint.END,
) npt.NDArray[np.uint64]#

Returns the timestamps of all frames at the specified frame-relative timepoint (start or end) [shape (N,)]

has_frame_generic_data(
frame_index: int,
name: str,
) bool#

Signals if named generic frame-data exists

property pose_graph: PoseGraphInterpolator#

Access the sensor-associated pose graph (usually the sequence-wide one, unless overwritten)

property sensor_id: str#

The ID of the sensor

set_pose_graph(
pose_graph: PoseGraphInterpolator,
) None#

Assigns a new pose graph to the sensor instance (e.g., to overwrite / use a sensor-exclusive pose graph)

class ncore.data.SequenceLoaderProtocol(*args, **kwargs)#

Bases: Protocol

SequenceLoaderProtocol provides unified access to a relevant subset of common NCore sequence data APIs

property camera_ids: List[str]#

All camera sensor IDs in the sequence

property generic_meta_data: Dict[str, Dict[str, Dict[str, JsonLike] | List[JsonLike] | str | int | float | bool | None | List[int]] | List[Dict[str, JsonLike] | List[JsonLike] | str | int | float | bool | None | List[int]] | str | int | float | bool | None | List[int]]#

Generic meta-data associated with the sequence

get_camera_sensor(
sensor_id: str,
) CameraSensorProtocol#

Returns a camera sensor instance for a given sensor id

get_cuboid_track_observations(
timestamp_interval_us: HalfClosedInterval | None = None,
) Generator[CuboidTrackObservation]#

Returns all available cuboid track observations in the sequence.

Parameters:

timestamp_interval_us – If provided, only observations whose timestamp_us falls within this half-closed interval [start, stop) are returned. When None (default), all observations are returned.

get_lidar_sensor(
sensor_id: str,
) LidarSensorProtocol#

Returns a lidar sensor instance for a given sensor id

get_radar_sensor(
sensor_id: str,
) RadarSensorProtocol#

Returns a radar sensor instance for a given sensor id

get_sequence_meta() Dict[str, Dict[str, Dict[str, JsonLike] | List[JsonLike] | str | int | float | bool | None | List[int]] | List[Dict[str, JsonLike] | List[JsonLike] | str | int | float | bool | None | List[int]] | str | int | float | bool | None | List[int]]#

Returns sequence-wide meta-data summary (format is instance-dependent)

property lidar_ids: List[str]#

All lidar sensor IDs in the sequence

property pose_graph: PoseGraphInterpolator#

The pose graph representing all static and dynamic transformations in the sequence

property radar_ids: List[str]#

All radar sensor IDs in the sequence

reload_resources() None#

Reloads any resources used by the internal sequence loader (potentially required for multi-process data loading)

property sequence_id: str#

The unique identifier of the sequence

property sequence_paths: List[UPath]#

List of all dataset paths comprising this sequence (shards / components)

property sequence_timestamp_interval_us: HalfClosedInterval#

The time range of the sequence in microseconds

class ncore.data.ShutterType(value)#

Bases: IntEnum

Enumerates different possible camera imager shutter types

GLOBAL = 5#

Instantaneous global shutter (no rolling shutter)

ROLLING_BOTTOM_TO_TOP = 3#

Rolling shutter from bottom to top of the imager

ROLLING_LEFT_TO_RIGHT = 2#

Rolling shutter from left to right of the imager

ROLLING_RIGHT_TO_LEFT = 4#

Rolling shutter from right to left of the imager

ROLLING_TOP_TO_BOTTOM = 1#

Rolling shutter from top to bottom of the imager