Skip connections are central to U-Net architectures for image denoising, but standard concatenation doubles channel dimensionality and obscures information flow, permitting uncontrolled transfer of encoder noise. We propose the Additive U-Net, which replaces concatenative skips with gated additive connections. Each skip pathway is scaled by a learnable non-negative scalar, offering explicit and interpretable control over encoder contributions while avoiding channel inflation. Evaluations on the Kodak-17 denoising benchmark show that the Additive U-Net achieves competitive PSNR/SSIM at noise levels σ = 15, 25, 50 and remains robust across kernel schedules and network depths. Notably, effective denoising is achieved even without explicit down/up-sampling or forced hierarchies, as the model naturally learns a progression from high-frequency to band-pass to low-frequency features. These results position additive skips as a lightweight and interpretable alternative to concatenation, enabling both efficient design and a clearer understanding of multi-scale information transfer in reconstruction networks.
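The contrast between gated additive skips and standard concatenative skips can be sketched as follows. This is a minimal NumPy illustration of the shape arithmetic only, not the paper's implementation: the gate here is a fixed scalar passed through a softplus to enforce non-negativity, whereas in the actual network it would be a learnable parameter, and the feature maps and dimensions below are hypothetical.

```python
import numpy as np

def softplus(x):
    # Smooth non-negative reparameterization: gate = log(1 + e^theta) >= 0,
    # one way to keep a learnable scalar non-negative during training.
    return np.log1p(np.exp(x))

def additive_skip(dec_feat, enc_feat, theta):
    """Gated additive skip: decoder feature plus a non-negative
    scalar-gated encoder feature. Channel count is unchanged."""
    gate = softplus(theta)  # learnable in practice; fixed scalar here
    return dec_feat + gate * enc_feat

def concat_skip(dec_feat, enc_feat):
    """Standard U-Net skip: channel concatenation doubles C."""
    return np.concatenate([dec_feat, enc_feat], axis=1)  # (N, 2C, H, W)

# Hypothetical feature maps: batch 1, C = 8 channels, 16x16 spatial
rng = np.random.default_rng(0)
dec = rng.standard_normal((1, 8, 16, 16))
enc = rng.standard_normal((1, 8, 16, 16))

add_out = additive_skip(dec, enc, theta=0.0)  # gate = log(2) ~ 0.693
cat_out = concat_skip(dec, enc)

print(add_out.shape)  # (1, 8, 16, 16) -- no channel inflation
print(cat_out.shape)  # (1, 16, 16, 16) -- channels doubled
```

Because the additive skip preserves the channel count, the subsequent decoder convolution needs half as many input channels as in the concatenative design, and the scalar gate directly exposes how much each encoder stage contributes.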