Digital whole slide images (WSIs) are generally captured at microscopic resolution and encompass extensive spatial data. Directly feeding these images to deep learning models is computationally intractable due to memory constraints, while downsampling the WSIs risks incurring information loss. Alternatively, splitting the WSIs into smaller patches may result in a loss of important contextual information. In this paper, we propose a novel dual attention approach, consisting of two main components, both inspired by the visual examination process of a pathologist: the first, a soft attention model, processes a low-magnification view of the WSI to identify relevant regions of interest (ROIs), followed by a custom sampling method that extracts diverse and spatially distinct image tiles from the selected ROIs. The second component, a hard attention classification model, further extracts a sequence of multi-resolution glimpses from each tile for classification. Since hard attention is non-differentiable, we train this component using reinforcement learning to predict the locations of the glimpses. This approach allows the model to focus on essential regions instead of processing the entire tile, thereby aligning with a pathologist's way of diagnosis. The two components are trained in an end-to-end fashion using a joint loss function. The proposed model was evaluated on two WSI-level classification problems: human epidermal growth factor receptor 2 (HER2) scoring on breast cancer histology images and prediction of the Intact/Loss status of two Mismatch Repair (MMR) biomarkers on colorectal cancer histology images. We show that the proposed model achieves performance better than or comparable to state-of-the-art methods while processing less than 10% of the WSI at the highest magnification and reducing the time required to infer the WSI-level label by more than 75%.
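The most implementation-specific claim above is that the non-differentiable hard attention component is trained with reinforcement learning to predict glimpse locations. The following is a minimal, hypothetical PyTorch sketch of that idea: a recurrent agent that crops a glimpse around a predicted location, updates its state, and samples the next location from a Gaussian policy, trained with a joint cross-entropy plus REINFORCE loss. All names (`GlimpseAgent`, the glimpse size, the correctness-based reward) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a REINFORCE-trained hard-attention glimpse model.
# Not the paper's code; names, sizes, and the reward are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlimpseAgent(nn.Module):
    """At each step: crop a small glimpse around the current (x, y) location,
    encode it, update a GRU state, and sample the next location from a
    stochastic Gaussian policy (the non-differentiable hard attention)."""
    def __init__(self, glimpse_size=32, hidden=256, n_classes=4):
        super().__init__()
        self.glimpse_size = glimpse_size
        self.encoder = nn.Sequential(            # encodes one glimpse crop
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, hidden))
        self.rnn = nn.GRUCell(hidden, hidden)
        self.loc_head = nn.Linear(hidden, 2)     # mean of next location in [-1, 1]
        self.cls_head = nn.Linear(hidden, n_classes)

    def crop(self, tile, loc):
        """Extract a glimpse_size x glimpse_size window centred at loc,
        where loc is a batch of (x, y) in normalized [-1, 1] coordinates."""
        B, _, H, W = tile.shape
        gs = self.glimpse_size
        theta = torch.zeros(B, 2, 3, device=tile.device)
        theta[:, 0, 0] = gs / W                  # zoom factors select a small window
        theta[:, 1, 1] = gs / H
        theta[:, :, 2] = loc                     # translation places it at loc
        grid = F.affine_grid(theta, (B, 3, gs, gs), align_corners=False)
        return F.grid_sample(tile, grid, align_corners=False)

    def forward(self, tile, n_glimpses=4, loc_std=0.1):
        B = tile.size(0)
        h = tile.new_zeros(B, self.rnn.hidden_size)
        loc = tile.new_zeros(B, 2)               # start at the tile centre
        log_probs = []
        for _ in range(n_glimpses):
            g = self.crop(tile, loc)
            h = self.rnn(self.encoder(g), h)
            mu = torch.tanh(self.loc_head(h))
            dist = torch.distributions.Normal(mu, loc_std)
            raw = dist.sample()                  # stochastic, non-differentiable step
            log_probs.append(dist.log_prob(raw).sum(-1))
            loc = raw.clamp(-1, 1)
        return self.cls_head(h), torch.stack(log_probs, dim=1)

def joint_loss(logits, log_probs, labels):
    """Joint objective: cross-entropy on the label plus a REINFORCE term that
    rewards glimpse trajectories leading to a correct prediction."""
    ce = F.cross_entropy(logits, labels)
    reward = (logits.argmax(-1) == labels).float().unsqueeze(1)  # B x 1
    reinforce = -(log_probs * reward.detach()).mean()
    return ce + reinforce
```

A production variant would typically subtract a learned baseline from the reward to reduce the variance of the REINFORCE gradient, and would condition the first glimpse location on the soft attention map produced from the low-magnification view rather than starting at the tile centre.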