One-shot segmentation of brain tissue typically requires iteratively training a registration-segmentation (reg-seg) dual model, where the reg-model provides pseudo-masks of unlabeled images for the seg-model by warping a carefully labeled atlas. However, the imperfect reg-model induces image-mask misalignment, which subsequently poisons the seg-model. The recent StyleSeg bypasses this bottleneck by replacing the unlabeled images with warped copies of the atlas, but it must borrow the diverse appearance patterns of the unlabeled images via style transformation. Here, we present StyleSeg V2, which inherits from StyleSeg but is additionally able to perceive registration errors. The motivation is that a good registration behaves in a mirrored fashion for mirrored images. Therefore, almost at no cost, StyleSeg V2 lets the reg-model itself "speak out" incorrectly aligned regions by simply mirroring (symmetrically flipping the brain in) its input: registration errors manifest as symmetric inconsistencies between the outputs for the original and mirrored inputs. Consequently, StyleSeg V2 enables the seg-model to exploit the correctly aligned regions of unlabeled images, and also enhances the fidelity of the style-transformed warped atlas image by weighting the local transformation strength according to the registration errors. Experimental results on three public datasets demonstrate that the proposed StyleSeg V2 outperforms other state-of-the-art methods by considerable margins, and exceeds StyleSeg by increasing the average Dice score by at least 2.4%.
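The mirror-consistency idea above can be sketched in a few lines: register the original pair, register a left-right-flipped copy of the pair, map the second deformation field back into the original frame, and take the voxel-wise discrepancy as a registration-error map. This is only an illustrative sketch, not the paper's implementation; `reg_model` is a hypothetical callable returning a 2D displacement field of shape `(2, H, W)`, and we assume channel 0 stores the displacement along the flipped (last) axis.

```python
import numpy as np

def symmetric_inconsistency(reg_model, moving, fixed):
    """Estimate a per-pixel registration-error map via mirroring.

    A sketch of the mirror-consistency idea: a symmetry-respecting
    registration should produce, for flipped inputs, the flipped
    (and sign-adjusted) version of the original displacement field.
    """
    # Forward pass on the original moving/fixed pair.
    phi = reg_model(moving, fixed)
    # Mirror (flip the last spatial axis of) both inputs and register again.
    phi_m = reg_model(np.flip(moving, axis=-1), np.flip(fixed, axis=-1))
    # Map the mirrored field back into the original frame: flip it spatially
    # and negate the displacement component along the flipped axis
    # (assumed here to be channel 0).
    phi_m_back = np.flip(phi_m, axis=-1).copy()
    phi_m_back[0] *= -1
    # Perfect mirror-consistency gives phi == phi_m_back; the voxel-wise
    # discrepancy serves as the registration-error map.
    return np.linalg.norm(phi - phi_m_back, axis=0)
```

Such an error map could then be normalized and used both to mask out poorly aligned regions when supervising the seg-model on unlabeled images, and to modulate the local style-transformation strength on the warped atlas.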