Objectives: This work aims to explore the impact of multicenter data heterogeneity on the performance of deep learning autosegmentation of brain metastases (BM), and to assess the efficacy of an incremental transfer learning technique, learning without forgetting (LWF), in improving model generalizability without sharing raw data.
Materials and methods: A total of six BM datasets, from University Hospital Erlangen (UKER), University Hospital Zurich (USZ), Stanford, UCSF, NYU, and the BraTS Challenge 2023 on BM segmentation, were used for this evaluation. First, the multicenter performance of a convolutional neural network (DeepMedic) for BM autosegmentation was established for exclusive single-center training and for training on pooled multicenter data. Subsequently, a bilateral collaboration was evaluated, in which a UKER-pretrained model was shared with another center for further training via transfer learning (TL), either with or without LWF.
Results: For single-center training, average F1 scores for BM detection ranged from 0.625 (NYU) to 0.876 (UKER) on the respective single-center test data. Mixed multicenter training notably improved F1 scores at Stanford and NYU, with negligible improvement at the other centers. When the UKER-pretrained model was applied to USZ, LWF achieved a higher average F1 score (0.839) than naive TL (0.570) and single-center training (0.688) on combined UKER and USZ test data. Naive TL improved sensitivity and contouring accuracy but compromised precision, whereas LWF maintained high sensitivity, precision, and contouring accuracy. Similar performance was observed when the approach was applied to Stanford.
Conclusion: Data heterogeneity results in varying BM autosegmentation performance, posing challenges to model generalizability. LWF is a promising approach to peer-to-peer, privacy-preserving model training.
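The LWF scheme summarized above combines a supervised loss on the new center's labels with a knowledge-distillation term that keeps the adapted model's predictions close to the frozen pretrained model's soft outputs, which is what prevents catastrophic forgetting of the first center. The following is a minimal NumPy sketch of that combined loss for per-voxel classification logits; the function and parameter names (`lwf_loss`, `lam`, `t`) are illustrative and not taken from the paper's DeepMedic implementation.

```python
import numpy as np

def softmax(logits, t=1.0):
    """Numerically stable softmax with optional temperature t."""
    z = logits / t
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def lwf_loss(new_logits, old_logits, labels, lam=1.0, t=2.0):
    """Illustrative LWF objective: cross-entropy on the new center's
    labels plus a temperature-scaled distillation term against the
    frozen pretrained model's soft outputs.

    new_logits: (N, C) logits of the model being adapted
    old_logits: (N, C) logits of the frozen pretrained model
    labels:     (N,)   integer class labels from the new center
    lam:        weight of the distillation term (hypothetical default)
    """
    p_new = softmax(new_logits)
    n = labels.shape[0]
    # supervised cross-entropy on new-center annotations
    ce = -np.log(p_new[np.arange(n), labels] + 1e-12).mean()
    # distillation: match softened old-model predictions
    q_old = softmax(old_logits, t)
    q_new = softmax(new_logits, t)
    distill = -(q_old * np.log(q_new + 1e-12)).sum(axis=-1).mean()
    return ce + lam * distill
```

In practice the distillation term is computed on voxels from the new center's images, so no raw data from the original center needs to be shared; only the pretrained weights travel between sites.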