Topological correctness plays a critical role in many image segmentation tasks, yet most networks are trained with pixel-wise loss functions such as Dice, which neglect topological accuracy. Existing topology-aware methods often lack robust topological guarantees, are limited to specific use cases, or impose high computational costs. In this work, we propose a novel, graph-based framework for topologically accurate image segmentation that is both computationally efficient and generally applicable. Our method constructs a component graph that fully encodes the topological information of both the prediction and the ground truth, allowing us to efficiently identify topologically critical regions and aggregate a loss based on local neighborhood information. Furthermore, we introduce a strict topological metric capturing the homotopy equivalence between the union and intersection of prediction-label pairs. We formally prove the topological guarantees of our approach and empirically validate its effectiveness on binary and multi-class datasets. Our loss demonstrates state-of-the-art performance with up to fivefold faster loss computation compared to persistent homology methods.
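To make the component-graph idea concrete, the following is a minimal, hypothetical sketch: the prediction and ground truth are overlaid into four classes (TN, FN, FP, TP), each 4-connected component of a class becomes a node, and adjacent components are joined by an edge. The function name, the overlay encoding, and the pure-Python BFS flood fill are illustrative assumptions, not the paper's exact construction.

```python
from collections import deque

def component_graph(pred, gt):
    """Hypothetical sketch of a component graph for a binary prediction/label pair.

    Nodes are 4-connected components of the overlay classes
    (0 = TN, 1 = FN, 2 = FP, 3 = TP); edges link adjacent components.
    `pred` and `gt` are equally sized 2D lists of 0/1 values.
    """
    h, w = len(pred), len(pred[0])
    # Encode each pixel's (prediction, label) pair as one of four classes.
    overlay = [[2 * pred[y][x] + gt[y][x] for x in range(w)] for y in range(h)]
    comp = [[-1] * w for _ in range(h)]   # component id per pixel
    node_class = []                       # overlay class of each node
    for y in range(h):
        for x in range(w):
            if comp[y][x] != -1:
                continue
            cid, cls = len(node_class), overlay[y][x]
            node_class.append(cls)
            q = deque([(y, x)])
            comp[y][x] = cid
            while q:                      # BFS flood fill of one component
                cy, cx = q.popleft()
                for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                               (cy, cx - 1), (cy, cx + 1)):
                    if (0 <= ny < h and 0 <= nx < w
                            and comp[ny][nx] == -1 and overlay[ny][nx] == cls):
                        comp[ny][nx] = cid
                        q.append((ny, nx))
    edges = set()
    for y in range(h):
        for x in range(w):
            for ny, nx in ((y + 1, x), (y, x + 1)):  # down/right neighbors
                if ny < h and nx < w and comp[ny][nx] != comp[y][x]:
                    edges.add(tuple(sorted((comp[y][x], comp[ny][nx]))))
    return node_class, sorted(edges)
```

On such a graph, topologically critical regions (e.g. false-positive or false-negative components that would merge or split structures) can be identified by inspecting each node's class together with its neighborhood, which is what allows a loss to be aggregated from local graph information rather than from a global persistent-homology computation.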