Medical Image Analysis (MedIA) has become indispensable in modern healthcare, enhancing clinical diagnostics and personalized treatment. Despite remarkable advances driven by deep learning (DL), practical deployment remains hindered by distribution shifts, where models trained on data from specific sources underperform on data from other hospitals, regions, or patient populations. To address this issue, researchers have been actively developing strategies that increase the adaptability and robustness of DL models, enabling their effective use in unfamiliar and diverse environments. This paper systematically reviews approaches that apply DL techniques to MedIA systems affected by distribution shifts. Unlike traditional categorizations based on technical specifications, our taxonomy is grounded in the real-world operational constraints faced by healthcare institutions. Specifically, we categorize the existing body of work into Joint Training, Federated Learning, Fine-tuning, and Domain Generalization, each tailored to distinct scenarios shaped by Data Accessibility, Privacy Concerns, and Collaborative Protocols. This perspective equips researchers with a nuanced understanding of how DL can be strategically deployed to address distribution shifts in MedIA, enabling diverse and robust medical applications. By delving deeper into these topics, we highlight potential pathways for future research that not only address existing limitations but also push the boundaries of deployable MedIA technologies.