Joint Alignment (JA) of images aims to align a collection of images into a unified coordinate frame, such that semantically-similar features appear at corresponding spatial locations. Most existing approaches require long training times, large-capacity models, and extensive hyperparameter tuning. We introduce FastJAM, a rapid, graph-based method that drastically reduces the computational complexity of joint alignment tasks. FastJAM leverages pairwise matches computed by an off-the-shelf image matcher, together with rapid nonparametric clustering, to construct a graph representing intra- and inter-image keypoint relations. A graph neural network propagates and aggregates these correspondences, efficiently predicting per-image homography parameters via image-level pooling. Utilizing an inverse-compositional loss that eliminates the need for a regularization term over the predicted transformations (and thus also obviates the associated hyperparameter tuning), FastJAM performs image JA quickly and effectively. Experimental results on several benchmarks demonstrate that FastJAM outperforms existing modern JA methods in alignment quality while reducing computation time from hours or minutes to mere seconds. Our code is available at our project webpage: https://bgu-cs-vil.github.io/FastJAM/
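The pipeline described above (pairwise matches → keypoint graph → GNN message passing → image-level pooling → per-image homography parameters) can be sketched in miniature. This is a minimal illustrative sketch, not the FastJAM implementation: the shapes, the random edge list standing in for matcher/clustering output, the single mean-aggregation step standing in for the GNN, and the linear head are all assumptions made for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (hypothetical sizes): 4 images, 10 keypoints each, 16-dim features.
num_images, kpts_per_image, feat_dim = 4, 10, 16
num_nodes = num_images * kpts_per_image
feats = rng.normal(size=(num_nodes, feat_dim))
image_of_node = np.repeat(np.arange(num_images), kpts_per_image)

# Random directed edges standing in for the intra-/inter-image keypoint
# relations produced by the matcher and the nonparametric clustering.
edges = rng.integers(0, num_nodes, size=(200, 2))

# One mean-aggregation message-passing step (a stand-in for the GNN that
# propagates and aggregates correspondences).
agg = np.zeros_like(feats)
deg = np.zeros(num_nodes)
for src, dst in edges:
    agg[dst] += feats[src]
    deg[dst] += 1
feats = feats + agg / np.maximum(deg, 1)[:, None]

# Image-level mean pooling, then a (randomly initialized) linear head that
# predicts 8 parameters per image -- a 3x3 homography has 8 degrees of
# freedom, with the bottom-right entry fixed to 1.
pooled = np.stack([feats[image_of_node == i].mean(axis=0)
                   for i in range(num_images)])
W = rng.normal(scale=0.01, size=(feat_dim, 8))
params = pooled @ W  # shape: (num_images, 8)

# Map the 8 parameters to full 3x3 homographies near the identity.
H = np.tile(np.eye(3), (num_images, 1, 1))
H[:, :2, :] += params[:, :6].reshape(num_images, 2, 3)
H[:, 2, :2] += params[:, 6:]
print(H.shape)  # (4, 3, 3)
```

In FastJAM itself the graph features come from real keypoint matches, the aggregation is a trained GNN, and the predicted homographies are optimized with the inverse-compositional loss; the sketch only shows how pooled node features can map to per-image transformation parameters.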