Robotics SDK


Object Detection

Real-time perception pipeline for detecting and classifying objects from camera streams on robots and edge devices.

Installation

The pipeline ships as part of the perception package. Install it alongside the runtime to get the GPU-accelerated kernels and the pretrained model registry.

```shell
pip install perception==2.4.0
```

Usage

Instantiate a DetectionPipeline with a pretrained model and a camera source, then iterate over detections in your control loop. The pipeline manages preprocessing, batched inference, and non-maximum suppression so your control code stays focused on policy decisions.

```python
from isaac.perception import DetectionPipeline, CameraSource

pipeline = DetectionPipeline(
    model="yolov8-nano",
    confidence=0.6,
    device="cuda:0",
)

camera = CameraSource(topic="/front_cam/image_raw")

for frame in camera.stream():
    detections = pipeline.run(frame)
    for det in detections:
        print(det.label, det.bbox, det.score)
```

Inference latency above 30 ms can break real-time control loops on mobile robots; profile end-to-end before deploying.
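The pipeline runs non-maximum suppression internally, so you never call it yourself. As a rough illustration of what that step does, here is a minimal greedy NMS sketch in plain Python; it is not the SDK's implementation, and the `iou`/`nms` helpers are illustrative names only:

```python
def iou(a, b):
    # Boxes as (x1, y1, x2, y2); returns intersection-over-union.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, iou_threshold=0.5):
    # Greedy NMS: repeatedly keep the highest-scoring box and
    # discard remaining boxes that overlap it above the threshold.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep
```

Production pipelines run this on the GPU over batched candidates, but the pruning logic is the same.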
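To act on the profiling advice above, a small wrapper can collect per-frame latency percentiles before you commit to a deployment. The sketch below is framework-agnostic and uses only the standard library; `profile_latency` and its parameters are illustrative, not part of the SDK:

```python
import time

def profile_latency(fn, frames, warmup=5):
    """Time fn once per frame; return p50/p95/max latency in milliseconds."""
    # Warm-up runs absorb one-time costs (allocator, GPU kernel compilation).
    for frame in frames[:warmup]:
        fn(frame)
    samples = []
    for frame in frames[warmup:]:
        start = time.perf_counter()
        fn(frame)
        samples.append((time.perf_counter() - start) * 1000.0)
    samples.sort()
    return {
        "p50_ms": samples[len(samples) // 2],
        "p95_ms": samples[int(len(samples) * 0.95)],
        "max_ms": samples[-1],
    }
```

Passing `pipeline.run` as `fn` with a list of recorded frames gives an end-to-end figure you can check against the 30 ms budget; watch p95 and max rather than the median, since tail latency is what breaks a control loop.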

API reference

See the API tab for the full DetectionPipeline class signature, supported models, and event hooks. Related primitives: CameraSource, Tracker, DepthEstimator.