Current color fundus image registration approaches are limited by, among other factors, the scarcity of labeled data, a problem that is especially acute in the medical domain and that motivates the use of unsupervised learning. Therefore, in this work, we develop a novel unsupervised descriptor learning method that does not rely on keypoint detection. This makes the resulting descriptor network agnostic to the keypoint detector used during registration inference. To validate this approach, we perform an extensive and comprehensive comparison on the reference public retinal image registration dataset. Additionally, we test our method with multiple keypoint detectors of varied nature, including some novel ones that we propose. Our results demonstrate that the proposed approach offers accurate registration without incurring any performance loss compared to supervised methods. Moreover, it performs accurately regardless of the keypoint detector used. Thus, this work represents a notable step towards leveraging unsupervised learning in the medical domain.