The universal approximation property is fundamental to the success of neural networks, and has traditionally been achieved by training networks without any constraints on their parameters. However, recent experimental research proposed a novel permutation-based training method, which achieved the desired classification performance without modifying the exact values of the weights. In this paper, we provide a theoretical guarantee for this permutation training method by proving its ability to guide a ReLU network to approximate one-dimensional continuous functions. Our numerical results further validate the method's efficiency in regression tasks across various initializations. Notable observations made during weight permutation suggest that permutation training can serve as an innovative tool for describing network learning behavior.
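To make the setting concrete, the following is a minimal sketch of permutation-only training on a 1D regression task. It is an illustrative assumption, not the paper's algorithm: the permutation rule here is a simple random-swap search that accepts improving transpositions, the target `sin(pi*x)` and network width are arbitrary choices, and the network is a one-hidden-layer ReLU model whose parameter values are fixed at initialization and only reordered.

```python
# A minimal sketch of permutation training, assuming a random pairwise-swap
# search (the paper's exact permutation rule is not reproduced here).
# A one-hidden-layer ReLU network is fit to a 1D continuous target by
# only permuting its initial parameter values, never changing them.
import numpy as np

rng = np.random.default_rng(0)
n_hidden = 64
x = np.linspace(-1.0, 1.0, 200)
target = np.sin(np.pi * x)  # hypothetical 1D continuous target

# Fixed multiset of parameter values; training only reorders `theta`.
theta = rng.standard_normal(3 * n_hidden)  # flattened (w, b, a)

def forward(params, x):
    # f(x) = sum_i a_i * ReLU(w_i * x + b_i)
    w, b, a = np.split(params, 3)
    return np.maximum(w[:, None] * x[None, :] + b[:, None], 0.0).T @ a

def mse(params):
    return np.mean((forward(params, x) - target) ** 2)

perm = np.arange(theta.size)
best = mse(theta[perm])
for step in range(20000):
    i, j = rng.integers(theta.size, size=2)
    perm[i], perm[j] = perm[j], perm[i]       # propose a transposition
    loss = mse(theta[perm])
    if loss < best:
        best = loss                           # keep improving swaps
    else:
        perm[i], perm[j] = perm[j], perm[i]   # revert otherwise
print(f"final MSE after permutation-only training: {best:.4f}")
```

Note that the weight values themselves are never updated; only the assignment of values to positions changes, which is what distinguishes permutation training from gradient-based training.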