Medical images and reports offer invaluable insights into patient health. However, the heterogeneity and complexity of these data hinder effective analysis. To bridge this gap, we investigate contrastive learning models for cross-domain retrieval, which matches medical images with their corresponding clinical reports. This study benchmarks the robustness of four state-of-the-art contrastive learning models: CLIP, CXR-RePaiR, MedCLIP, and CXR-CLIP. We introduce an occlusion retrieval task to evaluate model performance under varying levels of image corruption. Our findings reveal that all evaluated models are highly sensitive to out-of-distribution data, as evidenced by the proportional decrease in retrieval performance with increasing occlusion levels. While MedCLIP exhibits slightly more robustness, its overall performance remains well behind that of CXR-CLIP and CXR-RePaiR. CLIP, trained on a general-purpose dataset, struggles with medical image-report retrieval, highlighting the importance of domain-specific training data. Our evaluation suggests that further work is needed to improve the robustness of these models. By addressing these limitations, we can develop more reliable cross-domain retrieval models for medical applications.
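The occlusion retrieval protocol described above can be sketched in a few lines: corrupt each image by masking a patch whose area matches the occlusion level, embed images and reports, and score retrieval with recall@k against the diagonal ground truth. The sketch below is a minimal toy illustration, not the paper's exact setup; the flattened-pixel "embeddings" stand in for a CLIP-style image and text encoder, and the `occlude` patch strategy is one assumed corruption scheme among many.

```python
import numpy as np

def occlude(image, level, rng):
    """Zero out one random square patch covering roughly `level`
    of the image area (a simple stand-in for occlusion corruption)."""
    h, w = image.shape
    side = min(int(round((level * h * w) ** 0.5)), h, w)
    out = image.copy()
    if side > 0:
        y = rng.integers(0, h - side + 1)
        x = rng.integers(0, w - side + 1)
        out[y:y + side, x:x + side] = 0.0
    return out

def recall_at_k(sim, k=1):
    """Fraction of images whose matching report (the diagonal entry
    of the similarity matrix) ranks in the top-k."""
    order = np.argsort(-sim, axis=1)
    return float(np.mean([i in order[i, :k] for i in range(len(sim))]))

rng = np.random.default_rng(0)
images = rng.normal(size=(8, 32, 32))   # toy stand-ins for chest X-rays
reports = images.reshape(8, -1)         # stand-in report embeddings

for level in (0.0, 0.25, 0.5):
    occluded = np.stack([occlude(im, level, rng) for im in images])
    img_emb = occluded.reshape(8, -1)   # stand-in image encoder
    # cosine similarity between every image and every report embedding
    a = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    b = reports / np.linalg.norm(reports, axis=1, keepdims=True)
    print(f"occlusion {level:.2f}: recall@1 = {recall_at_k(a @ b.T, k=1)}")
```

With real encoders and clinical data, sweeping `level` and plotting recall@k traces out the robustness curves the study reports; the degradation with increasing occlusion only appears once learned, non-trivial embeddings replace the toy ones here.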