Vision-based perception systems are often exposed to large orientation changes in robotic applications. In such conditions, their performance may be compromised by the inherent difficulty of processing data captured under challenging motion. Integrating mechanical stabilizers to compensate for camera rotation is not always possible due to robot payload constraints. This paper presents a processing-based stabilization approach that compensates for the camera's rotational motion on both events and frames (i.e., images). Assuming the camera's attitude is available, we evaluate the benefits of stabilization in two perception applications: feature tracking and estimation of the translational component of the camera's ego-motion. Validation is performed using synthetic data and sequences from well-known event-based vision datasets. The experiments show that stabilization can improve feature tracking and camera ego-motion estimation accuracy by 27.37% and 34.82%, respectively. Concurrently, stabilization can reduce the processing time of computing the camera's linear velocity by at least 25%. Code is available at https://github.com/tub-rip/visual_stabilization
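The rotational compensation described above can be illustrated with a minimal sketch. Assuming a calibrated camera with intrinsic matrix K and a known attitude R at each timestamp, pixel coordinates (of events or image points) can be de-rotated with the pure-rotation homography H = K Rᵀ K⁻¹. The function name `derotate_points` and the example calibration values below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def derotate_points(xy, R, K):
    """Warp pixel coordinates xy (N, 2) by the inverse of the camera
    rotation R (3x3) using the pure-rotation homography H = K R^T K^-1.
    Illustrative sketch; the paper's implementation may differ."""
    H = K @ R.T @ np.linalg.inv(K)
    pts = np.column_stack([xy, np.ones(len(xy))])  # homogeneous pixel coords
    warped = (H @ pts.T).T
    return warped[:, :2] / warped[:, 2:3]          # back to inhomogeneous

# Toy example with assumed intrinsics: an identity rotation
# leaves event/pixel locations unchanged (H reduces to I).
K = np.array([[200.0, 0.0, 120.0],
              [0.0, 200.0, 90.0],
              [0.0, 0.0, 1.0]])
xy = np.array([[10.0, 20.0], [100.0, 50.0]])
out = derotate_points(xy, np.eye(3), K)
```

Applied per event (using the attitude interpolated at the event's timestamp) or per frame, this warp removes the rotational part of the apparent motion, leaving predominantly translation-induced motion for the downstream feature tracker or velocity estimator.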