Recent advances in computational pathology have produced patch-level Multi-modal Large Language Models (MLLMs), but these models are limited by their inability to analyze whole slide images (WSIs) comprehensively and by their tendency to overlook crucial morphological features that pathologists rely on for diagnosis. To address these challenges, we first introduce WSI-Bench, a large-scale morphology-aware benchmark containing 180k VQA pairs from 9,850 WSIs across 30 cancer types, designed to evaluate MLLMs' understanding of the morphological characteristics crucial for accurate diagnosis. Building upon this benchmark, we present WSI-LLaVA, a novel framework for gigapixel WSI understanding that employs a three-stage training approach: WSI-text alignment, feature space alignment, and task-specific instruction tuning. To better assess model performance in pathological contexts, we develop two specialized WSI metrics: WSI-Precision and WSI-Relevance. Experimental results demonstrate that WSI-LLaVA outperforms existing models across all capability dimensions, with a significant improvement in morphological analysis, establishing a clear correlation between morphological understanding and diagnostic accuracy.