Recent accelerations in multi-modal applications have been made possible by the plethora of image and text data available online. However, the scarcity of analogous data in the medical field, specifically in histopathology, has slowed comparable progress. To enable similar representation learning for histopathology, we turn to YouTube, an untapped resource offering $1,087$ hours of valuable educational histopathology videos from expert clinicians. From YouTube, we curate QUILT: a large-scale vision-language dataset consisting of $802,144$ image and text pairs. QUILT was automatically curated using a mixture of models, including large language models, handcrafted algorithms, human knowledge databases, and automatic speech recognition. By comparison, the most comprehensive datasets previously curated for histopathology amass only around $200$K samples. We combine QUILT with datasets from other sources, including Twitter, research papers, and the internet in general, to create an even larger dataset: QUILT-1M, with $1$M paired image-text samples, making it the largest vision-language histopathology dataset to date. We demonstrate the value of QUILT-1M by fine-tuning a pre-trained CLIP model. Our model outperforms state-of-the-art models in both zero-shot and linear-probing classification of new histopathology images across $13$ diverse patch-level datasets spanning $8$ different sub-pathologies, as well as in cross-modal retrieval.
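To make the evaluation protocol concrete, the sketch below illustrates CLIP-style zero-shot classification of a histopathology patch, the setting in which the fine-tuned model is evaluated. It is a minimal illustration, not the paper's actual pipeline: the checkpoint name, class labels, and prompt template are placeholder assumptions, and a model fine-tuned on QUILT-1M would be loaded in place of the generic OpenAI CLIP weights.

```python
# Minimal sketch of CLIP zero-shot classification on a patch-level image.
# Assumptions (not from the paper): the checkpoint name, the class list,
# and the prompt template are all illustrative placeholders.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# A QUILT-1M fine-tuned checkpoint would be substituted here.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical sub-pathology class names rendered as text prompts.
classes = ["adenocarcinoma", "benign tissue", "squamous cell carcinoma"]
prompts = [f"a histopathology image of {c}" for c in classes]

image = Image.open("patch.png")  # one patch-level histopathology image
inputs = processor(text=prompts, images=image,
                   return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model(**inputs)
    # logits_per_image holds scaled cosine similarities between the image
    # embedding and each class-prompt embedding; softmax yields class scores.
    probs = outputs.logits_per_image.softmax(dim=-1)

print(classes[probs.argmax().item()])
```

Linear probing follows the same encoder: image embeddings are frozen and a single linear classifier is trained on top, so both evaluations measure the quality of the learned representation rather than task-specific fine-tuning.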