In a real-world infrared imaging system, effectively learning a consistent stripe noise removal model is essential. Most existing destriping methods cannot precisely reconstruct images due to cross-level semantic gaps and insufficient characterization of global column features. To tackle this problem, we propose a novel infrared image destriping method, called Asymmetric Sampling Correction Network (ASCNet), that effectively captures global column relationships and embeds them into a U-shaped framework, providing comprehensive discriminative representation and seamless semantic connectivity. Our ASCNet consists of three core elements: Residual Haar Discrete Wavelet Transform (RHDWT), Pixel Shuffle (PS), and Column Non-uniformity Correction Module (CNCM). Specifically, RHDWT is a novel downsampler that employs double-branch modeling to integrate stripe-directional prior knowledge with data-driven semantic interaction, enriching the feature representation. Observing the semantic-pattern crosstalk of stripe noise, we introduce PS as an upsampler to prevent excessive a-priori decoding and perform semantic-bias-free image reconstruction. After each sampling, CNCM captures long-range column dependencies. By incorporating column, spatial, and self-dependence information, CNCM establishes a global context that distinguishes stripes from the scene's vertical structures. Extensive experiments on synthetic data, real data, and infrared small target detection tasks demonstrate that the proposed method outperforms state-of-the-art single-image destriping methods both visually and quantitatively. Our code will be made publicly available at https://github.com/xdFai/ASCNet.
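The two sampling primitives named in the abstract can be illustrated in isolation. Below is a minimal NumPy sketch (not the paper's actual modules, which add residual and learned branches): a single-level 2-D Haar DWT, whose vertical-detail subband isolates column-wise stripe structure and thus motivates the stripe-directional prior in RHDWT, and a pixel-shuffle (depth-to-space) upsampler, which rearranges channels into spatial resolution without injecting any learned prior.

```python
import numpy as np

def haar_dwt2(x):
    """Single-level 2-D Haar DWT: split an HxW image into four H/2 x W/2
    subbands (LL, LH, HL, HH). Pure column stripes (constant along rows,
    varying along columns) concentrate entirely in the HL band."""
    a = x[0::2, 0::2]  # top-left pixels of each 2x2 block
    b = x[0::2, 1::2]  # top-right
    c = x[1::2, 0::2]  # bottom-left
    d = x[1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 2.0  # low-pass approximation
    lh = (a + b - c - d) / 2.0  # horizontal detail
    hl = (a - b + c - d) / 2.0  # vertical detail (stripe-directional)
    hh = (a - b - c + d) / 2.0  # diagonal detail
    return ll, lh, hl, hh

def pixel_shuffle(x, r):
    """Depth-to-space: rearrange (C*r^2, H, W) features into (C, H*r, W*r),
    trading channels for spatial resolution with no learned parameters."""
    c2, h, w = x.shape
    c = c2 // (r * r)
    return (x.reshape(c, r, r, h, w)
             .transpose(0, 3, 1, 4, 2)
             .reshape(c, h * r, w * r))

# A synthetic stripe image: columns alternate 1, 3, 1, 3 ...
stripes = np.tile(np.array([1.0, 3.0, 1.0, 3.0]), (4, 1))
ll, lh, hl, hh = haar_dwt2(stripes)
# All stripe energy lands in HL; LH and HH vanish.
```

Because column stripes produce zero horizontal and diagonal detail, the HL subband gives a cheap directional handle on the noise; the pixel-shuffle path, by contrast, decodes features with no directional bias at all, which matches the asymmetric sampling idea in the abstract.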