Radiologists rely on anatomical understanding to accurately delineate pathologies, yet most current deep learning approaches use pure pattern recognition and ignore the anatomical context in which pathologies develop. To narrow this gap, we introduce GRASP (Guided Representation Alignment for the Segmentation of Pathologies), a modular plug-and-play framework that enhances pathology segmentation models by leveraging existing anatomy segmentation models through pseudo-label integration and feature alignment. Unlike previous approaches that obtain anatomical knowledge via auxiliary training, GRASP integrates into standard pathology optimization regimes without retraining its anatomical components. We evaluate GRASP on two PET/CT datasets, conduct systematic ablation studies, and investigate the framework's inner workings. We find that GRASP consistently achieves top rankings across multiple evaluation metrics and diverse architectures. The framework's dual anatomy injection strategy, which combines anatomical pseudo-labels as input channels with transformer-guided anatomical feature fusion, effectively incorporates anatomical context.
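To make the dual anatomy injection strategy concrete, the following is a minimal PyTorch sketch of the two paths the abstract names: anatomy pseudo-labels concatenated as an extra input channel, and cross-attention fusion of features from a frozen anatomy model. All names here (`GraspWrapper`, `encode`/`decode` on the pathology backbone, an anatomy model returning logits plus features, matching feature dimensions and spatial sizes, 2D inputs) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class AnatomyCrossAttentionFusion(nn.Module):
    """Fuse pathology features (queries) with anatomy features (keys/values)
    via multi-head cross-attention, added back residually."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, pathology_feat: torch.Tensor,
                anatomy_feat: torch.Tensor) -> torch.Tensor:
        # Flatten spatial grids to token sequences: (B, C, H, W) -> (B, H*W, C).
        b, c, h, w = pathology_feat.shape
        q = pathology_feat.flatten(2).transpose(1, 2)
        kv = anatomy_feat.flatten(2).transpose(1, 2)
        fused, _ = self.attn(self.norm(q), kv, kv)
        fused = q + fused  # residual connection keeps the pathology stream intact
        return fused.transpose(1, 2).reshape(b, c, h, w)


class GraspWrapper(nn.Module):
    """Hypothetical plug-and-play wrapper: trains only the pathology branch
    and the fusion block; the anatomy model stays frozen, as in the abstract."""

    def __init__(self, pathology_net: nn.Module, anatomy_net: nn.Module,
                 feat_dim: int):
        super().__init__()
        self.pathology_net = pathology_net
        self.anatomy_net = anatomy_net.eval()  # no retraining of anatomy parts
        for p in self.anatomy_net.parameters():
            p.requires_grad_(False)
        self.fusion = AnatomyCrossAttentionFusion(feat_dim)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # Assumed interface: anatomy_net returns (segmentation logits, features).
        with torch.no_grad():
            anatomy_logits, anatomy_feat = self.anatomy_net(image)
            pseudo_labels = anatomy_logits.argmax(1, keepdim=True).float()
        # Injection path 1: pseudo-labels as an extra input channel
        # (the backbone is assumed to be configured for C + 1 input channels).
        x = torch.cat([image, pseudo_labels], dim=1)
        pathology_feat = self.pathology_net.encode(x)
        # Injection path 2: transformer-guided anatomical feature fusion,
        # assuming both feature maps share channel dim and spatial size.
        fused = self.fusion(pathology_feat, anatomy_feat)
        return self.pathology_net.decode(fused)
```

Because the anatomy model is frozen and only consulted at the input and feature levels, this wrapper would slot into a standard pathology training loop unchanged, which is the sense in which the framework is modular and plug-and-play.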