In this paper, we introduce a Universal Motion Correction (UniMo) framework that leverages deep neural networks to tackle the challenges of motion correction across diverse imaging modalities. Our approach employs advanced neural network architectures with equivariant filters, overcoming the limitations of current models that require iterative inference or retraining for new image modalities. UniMo enables one-time training on a single modality while maintaining high stability and adaptability during inference across multiple unseen image modalities. We develop a joint learning framework that integrates multimodal knowledge from both shapes and images, which faithfully improves motion correction accuracy despite variations in image appearance. UniMo features a geometric deformation augmenter that enhances the robustness of global motion correction by accounting for local deformations, whether caused by object deformation or geometric distortion, and that also generates augmented data to improve training. Experimental results on multiple datasets spanning four image modalities demonstrate that UniMo surpasses existing motion correction methods in accuracy. By offering a comprehensive solution to motion correction, UniMo marks a significant advancement in medical imaging, especially in challenging applications with wide ranges of motion, such as fetal imaging. The code for this work is available online at https://github.com/IntelligentImaging/UNIMO/.