Objectives: This work explores the impact of multicenter data heterogeneity on the performance of deep learning autosegmentation of brain metastases (BM) and assesses the efficacy of an incremental transfer learning technique, learning without forgetting (LWF), for improving model generalizability without sharing raw data.

Materials and methods: Six BM datasets were used for this evaluation: University Hospital Erlangen (UKER), University Hospital Zurich (USZ), Stanford, UCSF, NYU, and the BraTS Challenge 2023 BM segmentation dataset. First, the multicenter performance of a convolutional neural network (DeepMedic) for BM autosegmentation was established for exclusive single-center training and for training on pooled data, respectively. Subsequently, bilateral collaboration was evaluated, in which a UKER-pretrained model was shared with another center for further training via transfer learning (TL), either with or without LWF.

Results: With single-center training, average F1 scores for BM detection ranged from 0.625 (NYU) to 0.876 (UKER) on the respective single-center test data. Pooled multicenter training notably improved F1 scores at Stanford and NYU, with negligible improvement at the other centers. When the UKER-pretrained model was adapted to USZ, LWF achieved a higher average F1 score (0.839) than naive TL (0.570) and single-center training (0.688) on the combined UKER and USZ test data. Naive TL improved sensitivity and contouring accuracy but compromised precision, whereas LWF maintained strong sensitivity, precision, and contouring accuracy. Similar behavior was observed when the model was adapted to Stanford.

Conclusion: Data heterogeneity results in varying BM autosegmentation performance, posing challenges to model generalizability. LWF is a promising approach to peer-to-peer, privacy-preserving model training.
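The core of the LWF approach summarized above is a training objective that fits the new center's labels while penalizing drift from the frozen pretrained model's predictions. Below is a minimal numpy sketch of such an objective for a softmax classifier head; the weight `lam` and temperature `T` are illustrative hyperparameters, not values from the paper, and the paper's DeepMedic model would apply this voxel-wise rather than per sample.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax along the last axis."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def lwf_loss(student_logits, teacher_logits, labels, lam=1.0, T=2.0):
    """Learning-without-forgetting objective (sketch):
    cross-entropy on the new center's labels, plus a knowledge-distillation
    term that keeps the adapted (student) model close to the frozen
    pretrained (teacher) model's temperature-softened predictions."""
    n = len(labels)
    p_student = softmax(student_logits)
    ce = -np.mean(np.log(p_student[np.arange(n), labels] + 1e-12))
    # Distillation: KL divergence between softened teacher and student outputs
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kd = np.mean(np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1))
    return ce + lam * kd
```

With `lam = 0` this reduces to naive TL (fit the new data only); the distillation term is what preserves performance on the pretraining center's data without ever sharing that raw data.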