Breast cancer is the most common cancer type in women worldwide. Early detection and appropriate treatment can significantly reduce its impact. While histopathology examinations play a vital role in rapid and accurate diagnosis, they often require a substantial workforce and experienced medical experts for proper recognition and cancer grading. Automated image retrieval systems have the potential to assist pathologists in identifying cancerous tissues, thereby accelerating the diagnostic process. Nevertheless, owing to the considerable variability of tissue and cell patterns in histological images, developing an accurate image retrieval model is very challenging. This work introduces a novel attention-based adversarially regularized variational graph autoencoder model for breast histological image retrieval. Additionally, we incorporate cluster-guided contrastive learning as the graph feature extractor to boost retrieval performance. We evaluated the proposed model on two publicly available datasets of breast cancer histological images and achieved superior or highly competitive retrieval performance, with average mAP scores of 96.5% on the BreakHis dataset and 94.7% on the BACH dataset, and mVP scores of 91.9% and 91.3%, respectively. The proposed retrieval model has the potential to be deployed in clinical settings to enhance diagnostic performance and ultimately benefit patients.