In MRI studies, the aggregation of imaging data from multiple acquisition sites enhances sample size but may introduce site-related variabilities that hinder consistency in subsequent analyses. Deep learning methods for image translation have emerged as a solution for harmonizing MR images across sites. In this study, we introduce IGUANe (Image Generation with Unified Adversarial Networks), an original 3D model that combines the strengths of domain-translation approaches with the straightforward applicability of style-transfer methods for multicenter brain MR image harmonization. IGUANe extends CycleGAN by integrating an arbitrary number of domains for training through a many-to-one architecture. The framework, based on domain pairs, enables sampling strategies that prevent confusion between site-related and biological variabilities. During inference, the model can be applied to any image, even from an unknown acquisition site, making it a universal generator for harmonization. Trained on a dataset comprising T1-weighted images from 11 different scanners, IGUANe was evaluated on data from unseen sites. The assessments included the transformation of MR images with traveling subjects, the preservation of pairwise distances between MR images within domains, the evolution of volumetric patterns related to age and Alzheimer's disease (AD), and the performance in age regression and patient classification tasks. Comparisons with other harmonization and normalization methods suggest that IGUANe better preserves individual information in MR images and is more suitable for maintaining and reinforcing variabilities related to age and AD. Future studies may further assess IGUANe in other multicenter contexts, either using the same model or retraining it for application to different image modalities. IGUANe is available at https://github.com/RocaVincent/iguane_harmonization.git.
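The many-to-one design and pair-based sampling described above can be illustrated with a toy sketch. This is not the authors' implementation: the dataset, the age-matching tolerance, and every name (`sample_matched_pair`, `discriminators`, the placeholder update call) are hypothetical, and real training would operate on 3D image volumes with a shared generator and adversarial losses. The sketch only shows the structural idea: one universal generator target, one discriminator per source site, and source/reference samples drawn with matched ages so site effects are not confounded with biological ones.

```python
import random

random.seed(0)

# Toy dataset: each sample is (site_id, age); the image itself is omitted.
# Three source sites plus one reference domain, 50 samples each.
sites = {s: [(s, random.uniform(20, 90)) for _ in range(50)] for s in range(3)}
reference = [("ref", random.uniform(20, 90)) for _ in range(50)]

def sample_matched_pair(site_id, tolerance=5.0):
    """Draw one source-site sample and one reference sample with similar
    ages, so site-related and age-related variability stay decoupled."""
    src = random.choice(sites[site_id])
    candidates = [r for r in reference if abs(r[1] - src[1]) <= tolerance]
    ref = random.choice(candidates) if candidates else None
    return src, ref

# Many-to-one structure: a single shared generator would map any source
# site to the reference domain, while each site keeps its own discriminator.
discriminators = {s: f"D_{s}" for s in sites}

for site_id in sites:
    src, ref = sample_matched_pair(site_id)
    if ref is not None:
        # adversarial_update(shared_G, discriminators[site_id], src, ref)
        assert abs(src[1] - ref[1]) <= 5.0  # ages matched within tolerance
```

The age-matched pairing is one example of the kind of sampling strategy the abstract alludes to; other covariates (e.g. diagnosis) could be matched the same way.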