Medical image processing usually requires a model trained on carefully crafted datasets due to unique image characteristics and domain-specific challenges, especially in pathology. Primitive detection and segmentation in digitized tissue samples are essential for objective and automated diagnosis and prognosis of cancer. SAM (Segment Anything Model) has recently been developed to segment general objects in natural images with high accuracy, but it requires human prompts to generate masks. In this work, we present a novel approach that adapts SAM's pre-trained natural-image encoder for detection-based region proposals. Regions proposed by the pre-trained encoder are sent to cascaded feature propagation layers for projection. Local semantics and global context are then aggregated across multiple scales for bounding-box localization and classification. Finally, the SAM decoder uses the identified bounding boxes as prompts to generate a comprehensive primitive segmentation map. The SAM backbone requires no additional training or fine-tuning, yet the framework produces end-to-end results for two fundamental segmentation tasks in pathology. Our method is competitive with state-of-the-art models in F1 score for nuclei detection, binary/multiclass panoptic quality (bPQ/mPQ), and mask quality (Dice) on the PanNuke dataset, while offering end-to-end efficiency. Our model also improves Average Precision by 4.5% over Faster R-CNN on a secondary dataset (HuBMAP Kidney). The code is publicly available at https://github.com/learner-codec/autoprom_sam.
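The pipeline above turns detected bounding boxes into segmentation prompts and merges the per-box masks into one instance map. A minimal sketch of that final step is shown below; `decode_box_prompt` is a hypothetical stand-in that simply fills each box, not the real SAM mask decoder, and the box coordinates are illustrative.

```python
import numpy as np

def decode_box_prompt(box, shape):
    """Stand-in for the SAM decoder: returns a binary mask for one box prompt.
    (The real decoder would produce a tight object mask inside the box.)"""
    y0, x0, y1, x1 = box
    mask = np.zeros(shape, dtype=bool)
    mask[y0:y1, x0:x1] = True
    return mask

def merge_instance_masks(boxes, shape):
    """Merge per-prompt masks into a single instance-labeled segmentation map."""
    seg = np.zeros(shape, dtype=np.int32)  # 0 = background
    for idx, box in enumerate(boxes, start=1):
        mask = decode_box_prompt(box, shape)
        seg[mask & (seg == 0)] = idx  # earlier prompts win on overlapping pixels
    return seg

# Hypothetical detections in (y0, x0, y1, x1) format
boxes = [(2, 2, 5, 5), (4, 4, 8, 8)]
seg = merge_instance_masks(boxes, (10, 10))
```

With the real SAM decoder, `decode_box_prompt` would instead call the model with the box as a prompt; the merging logic stays the same.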