Edge detection is a fundamental task in computer vision. It has made great progress with the development of deep convolutional neural networks (DCNNs), some of which have achieved beyond human-level performance. However, recent top-performing edge detection methods tend to generate thick and noisy edge lines. In this work, we address this problem from two aspects: (1) the lack of prior knowledge regarding image edges, and (2) the issue of imbalanced pixel distribution. We propose a second-order derivative-based multi-scale contextual enhancement module (SDMCM) that helps the model locate true edge pixels accurately by introducing edge prior knowledge. We also construct a hybrid focal loss function (HFL) to alleviate the imbalanced distribution issue. In addition, we employ conditionally parameterized convolution (CondConv) to develop a novel boundary refinement module (BRM), which further refines the final output edge maps. Finally, we propose a U-shape network named LUS-Net, based on the SDMCM and BRM, for crisp edge detection. We perform extensive experiments on three standard benchmarks, and the results show that our method predicts crisp and clean edge maps and achieves state-of-the-art performance on the BSDS500 dataset (ODS=0.829), NYUD-V2 dataset (ODS=0.768), and BIPED dataset (ODS=0.903).