Camera pose estimation from sparse correspondences is a fundamental problem in geometric computer vision and remains particularly challenging in near-field scenarios, where strong perspective effects and heterogeneous measurement noise can significantly degrade the stability of analytic PnP solutions. In this paper, we present a geometric error propagation framework for camera pose estimation based on a parallel perspective approximation. By explicitly modeling how image measurement errors propagate through perspective geometry, we derive an error transfer model that characterizes the relationship between feature point distribution, camera depth, and pose estimation uncertainty. Building on this analysis, we develop a pose estimation method that leverages parallel perspective initialization and error-aware weighting within a Gauss-Newton optimization scheme, leading to improved robustness in proximity operations. Extensive experiments on both synthetic data and real-world images, covering diverse conditions such as strong illumination, surgical lighting, and underwater low-light environments, demonstrate that the proposed approach achieves accuracy and robustness comparable to state-of-the-art analytic and iterative PnP methods, while maintaining high computational efficiency. These results highlight the importance of explicit geometric error modeling for reliable camera pose estimation in challenging near-field settings.
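The abstract's core optimization step, minimizing reprojection error with per-point weights inside a Gauss-Newton loop, can be sketched as below. This is a minimal illustration, not the paper's method: the weights are placeholders for the propagated-error weights the paper derives, and the starting pose stands in for its parallel perspective initialization.

```python
import numpy as np

def rodrigues(w):
    """Axis-angle vector -> rotation matrix (Rodrigues' formula)."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    k = w / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def residuals(pose, X, u):
    """Stacked reprojection residuals in normalized image coordinates.
    pose = (w, t): 3 axis-angle + 3 translation parameters."""
    R, t = rodrigues(pose[:3]), pose[3:]
    Xc = X @ R.T + t                 # 3-D points in the camera frame
    proj = Xc[:, :2] / Xc[:, 2:3]    # perspective projection
    return (proj - u).ravel()

def weighted_gauss_newton(pose0, X, u, w_pts, iters=20):
    """Gauss-Newton pose refinement with per-point weights.
    w_pts is hypothetical here; in the paper it would come from the
    geometric error propagation model."""
    pose = pose0.astype(float).copy()
    W = np.repeat(w_pts, 2)          # one weight per residual component
    eps = 1e-6
    for _ in range(iters):
        r = residuals(pose, X, u)
        # Numerical Jacobian by forward differences (adequate for a sketch).
        J = np.empty((r.size, 6))
        for j in range(6):
            d = np.zeros(6)
            d[j] = eps
            J[:, j] = (residuals(pose + d, X, u) - r) / eps
        # Weighted normal equations: (J^T W J) dx = J^T W r
        A = J.T @ (W[:, None] * J)
        b = J.T @ (W * r)
        pose -= np.linalg.solve(A, b)
    return pose
```

With noise-free correspondences and a starting pose near the truth, the loop converges to the exact pose in a few iterations; in practice the quality of the initialization and of the weights governs robustness, which is what the parallel perspective approximation and the error transfer model address.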