Vision-language pre-training, i.e., aligning images with paired text, is a powerful paradigm for creating encoders that can be used directly for tasks such as classification, retrieval, and segmentation. In the 3D medical image domain, these capabilities allow vision-language encoders (VLEs) to support radiologists by retrieving patients with similar abnormalities, predicting the likelihood of abnormalities, or, with downstream adaptation, generating radiological reports. While the methodology holds promise, data availability and domain-specific hurdles limit the capabilities of current 3D VLEs. In this paper, we overcome these challenges by injecting additional supervision via a report generation objective and by combining vision-language with vision-only pre-training. This allows us to leverage both image-only and paired image-text 3D datasets, increasing the total amount of data to which our model is exposed. Combining these additional objectives with best practices from the 3D medical imaging domain, we develop the Comprehensive Language-Image Pre-training (COLIPRI) encoder family. Our COLIPRI encoders achieve state-of-the-art performance in report generation, semantic segmentation, classification probing, and zero-shot classification. The model is available at https://huggingface.co/microsoft/colipri.