AI foundation models are gaining traction in various applications, including medical fields such as radiology. However, medical foundation models are often tested on limited tasks, leaving their generalizability and biases unexplored. We present RayDINO, a large visual encoder trained by self-supervision on 873k chest X-rays. We compare RayDINO to previous state-of-the-art models across nine radiology tasks, from classification and dense segmentation to text generation, and provide an in-depth analysis of the population, age, and sex biases of our model. Our findings suggest that self-supervision enables patient-centric AI that proves useful in clinical workflows and in interpreting X-rays holistically. With RayDINO and small task-specific adapters, we reach state-of-the-art results and improve generalization to unseen populations while mitigating bias, illustrating the true promise of foundation models: versatility and robustness.