We present a novel two-view geometry estimation framework based on differentiable robust loss function fitting. We propose to treat robust fundamental matrix estimation as an implicit layer, which avoids backpropagation through time and significantly improves numerical stability. To take full advantage of the information from the feature matching stage, we incorporate learnable weights that depend on the matching confidences. In this way, our solution brings together feature extraction, matching, and two-view geometry estimation in a unified end-to-end trainable pipeline. We evaluate our approach on the camera pose estimation task in both outdoor and indoor scenarios. Experiments on several datasets show that the proposed method outperforms both classical and learning-based state-of-the-art methods by a large margin. The project webpage is available at: https://github.com/VladPyatov/ihls
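To make the "robust fitting as an inner solver" idea concrete, below is a minimal NumPy sketch of iteratively reweighted least-squares (IRLS) fundamental-matrix estimation with per-match weights, i.e. the kind of inner optimization an implicit layer would differentiate through. This is an illustration only: the function names, the Huber-style weighting, and the weighted eight-point solver are our assumptions for exposition, not the paper's implementation (which treats the solve as an implicit layer rather than unrolling it).

```python
import numpy as np

def weighted_eight_point(x1, x2, w):
    """One weighted least-squares step of the eight-point algorithm.

    x1, x2: (N, 2) matched image points (normalized coordinates);
    w: (N,) per-correspondence weights (e.g. from matching confidences).
    """
    u1, v1 = x1[:, 0], x1[:, 1]
    u2, v2 = x2[:, 0], x2[:, 1]
    # One row of the epipolar constraint x2^T F x1 = 0 per correspondence.
    A = np.stack([u2 * u1, u2 * v1, u2,
                  v2 * u1, v2 * v1, v2,
                  u1, v1, np.ones_like(u1)], axis=1)
    # Weighted homogeneous least squares: smallest right singular vector
    # of diag(sqrt(w)) @ A gives the flattened F minimizing sum w_i r_i^2.
    _, _, Vt = np.linalg.svd(np.sqrt(w)[:, None] * A)
    F = Vt[-1].reshape(3, 3)
    # Enforce the rank-2 constraint on F.
    U, S, Vt = np.linalg.svd(F)
    S[2] = 0.0
    return U @ np.diag(S) @ Vt

def irls_fundamental(x1, x2, iters=10, delta=1.0):
    """IRLS: alternate a weighted eight-point solve with robust reweighting.

    Uses Huber-style weights on the algebraic epipolar residuals; the robust
    loss and its parameters are illustrative choices, not the paper's.
    """
    w = np.ones(len(x1))
    h1 = np.column_stack([x1, np.ones(len(x1))])
    h2 = np.column_stack([x2, np.ones(len(x2))])
    for _ in range(iters):
        F = weighted_eight_point(x1, x2, w)
        r = np.abs(np.einsum("ni,ij,nj->n", h2, F, h1))  # algebraic residuals
        w = np.where(r <= delta, 1.0, delta / np.maximum(r, 1e-12))
    return F
```

At a fixed point of this iteration, the implicit function theorem gives gradients of F with respect to the input weights without storing the unrolled iterations, which is what lets the framework avoid backpropagation through time.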