We investigate additive skip fusion in U-Net architectures for image denoising and denoising-centric multi-task learning (MTL). By replacing concatenative skips with gated additive fusion, the proposed Additive U-Net (AddUNet) constrains shortcut capacity while preserving fixed feature dimensionality across depth. This structural regularization induces controlled encoder-decoder information flow and stabilizes joint optimization. Across single-task denoising and joint denoising-classification settings, AddUNet achieves competitive reconstruction performance with improved training stability. In MTL, learned skip weights exhibit systematic task-aware redistribution: shallow skips favor reconstruction, while deeper features support discrimination. Notably, reconstruction remains robust even under limited classification capacity, indicating implicit task decoupling through additive fusion. These findings show that simple constraints on skip connections act as an effective architectural regularizer for stable and scalable multi-task learning without increasing model complexity.
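The core mechanism, replacing channel-concatenating skips with gated additive fusion so that feature width stays fixed across depth, can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation; the function names, the scalar per-skip gate, and the toy tensor shapes are assumptions for exposition.

```python
import numpy as np

def sigmoid(x):
    # Squashes the gate logit into (0, 1), bounding shortcut capacity.
    return 1.0 / (1.0 + np.exp(-x))

def concat_skip(decoder_feat, encoder_feat):
    # Conventional U-Net skip: channel concatenation doubles the width,
    # so the following decoder conv must accept 2x channels.
    return np.concatenate([decoder_feat, encoder_feat], axis=0)

def additive_skip(decoder_feat, encoder_feat, gate_logit):
    # Gated additive fusion (hypothetical form): a learned scalar gate
    # scales the encoder shortcut, and the elementwise sum keeps the
    # channel count unchanged at every depth.
    return decoder_feat + sigmoid(gate_logit) * encoder_feat

# Toy features: 8 channels over a 4x4 spatial map.
d = np.random.randn(8, 4, 4)   # decoder feature
s = np.random.randn(8, 4, 4)   # encoder skip feature

print(concat_skip(d, s).shape)            # (16, 4, 4): width doubles
print(additive_skip(d, s, 0.0).shape)     # (8, 4, 4): width fixed
```

Because the gate is a single learned parameter per skip level, inspecting its trained value directly exposes the task-aware redistribution the abstract reports (e.g., larger gates at shallow levels for reconstruction).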