Despite significant advancements, segmentation based on deep neural networks in medical and surgical imaging faces several challenges, two of which we aim to address in this work. First, acquiring complete pixel-level segmentation labels for medical images is time-consuming and requires domain expertise. Second, typical segmentation pipelines cannot detect out-of-distribution (OOD) pixels, leaving them prone to spurious outputs during deployment. In this work, we propose a novel segmentation approach exploiting OOD detection that learns only from sparsely annotated pixels from multiple positive-only classes, with \emph{no background class} annotation. These multi-class positive annotations naturally fall within the in-distribution (ID) set. Unlabelled pixels may contain positive classes but also negative ones, including what is typically referred to as \emph{background} in standard segmentation formulations. Here, we forgo the need for background annotation and consider these, together with any other unseen classes, as part of the OOD set. Our framework can integrate, at the pixel level, any OOD detection approach designed for classification tasks. To address the lack of existing OOD datasets and established evaluation metrics for medical image segmentation, we propose a cross-validation strategy that treats held-out labelled classes as OOD. Extensive experiments on both multi-class hyperspectral and RGB surgical imaging datasets demonstrate the robustness and generalisation capability of our proposed framework.
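To make the pixel-level integration of classification-style OOD detection concrete, the following is a minimal, hypothetical sketch (not the authors' implementation): it applies maximum softmax probability (MSP), a standard OOD detector for classifiers, independently at every pixel of a segmentation network's logits over the positive (ID) classes, and thresholds the resulting score to flag background or unseen-class pixels as OOD. The function name, array shapes, and threshold are illustrative assumptions.

\begin{verbatim}
# Hypothetical sketch: per-pixel MSP-based OOD scoring on segmentation logits.
import numpy as np

def pixelwise_msp_ood_score(logits: np.ndarray) -> np.ndarray:
    """logits: (H, W, C) per-pixel scores over C positive (ID) classes.
    Returns an (H, W) OOD score in [0, 1]; higher means more likely OOD."""
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    probs = np.exp(z)
    probs /= probs.sum(axis=-1, keepdims=True)
    # MSP: confidence is the max class probability; OOD score is its complement.
    return 1.0 - probs.max(axis=-1)

# Usage: flag likely background / unseen-class pixels; the threshold would be
# tuned on held-out labelled classes treated as OOD (cross-validation strategy).
logits = np.random.randn(4, 4, 3)              # toy (H=4, W=4, C=3) logits
ood_mask = pixelwise_msp_ood_score(logits) > 0.5
\end{verbatim}

Because the detector only consumes per-pixel class scores, any other classification OOD method (e.g. energy- or distance-based scores) could be substituted in the same way.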