Neural networks with positively homogeneous activations exhibit an exact continuous reparametrization symmetry: neuron-wise rescalings generate parameter-space orbits along which the input--output function is invariant. We interpret this symmetry as a gauge redundancy and introduce gauge-adapted coordinates that separate invariant directions from scale-imbalance directions. Inspired by gauge fixing in field theory, we then propose a soft orbit-selection (norm-balancing) functional that acts only on the redundant scale coordinates. We show analytically that it induces dissipative relaxation of the imbalance modes while preserving the realized function. In controlled experiments, the orbit-selection penalty expands the stable learning-rate regime and suppresses scale drift without changing expressivity. These results establish a structural link between gauge-orbit geometry and optimization conditioning, providing a concrete connection between gauge-theoretic concepts and machine learning.
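To make the symmetry and the relaxation claim concrete, consider a minimal two-layer sketch (our illustrative choice; the quartic penalty below is one natural norm-balancing functional, not necessarily the exact one analyzed in the paper). For a positively homogeneous activation $\sigma$, i.e. $\sigma(\lambda z) = \lambda\,\sigma(z)$ for $\lambda > 0$, the network
\[
f_\theta(x) = \sum_{i=1}^{m} a_i\,\sigma(w_i^\top x)
\]
is invariant under the per-neuron rescalings $(w_i, a_i) \mapsto (\lambda_i w_i,\ \lambda_i^{-1} a_i)$ with $\lambda_i > 0$, whose orbit tangent at $\lambda_i = 1$ is $(w_i, -a_i)$. A norm-balancing functional built from the imbalance coordinates $q_i = \|w_i\|_2^2 - a_i^2$,
\[
\mathcal{R}(\theta) = \frac{\mu}{4} \sum_{i=1}^{m} \bigl( \|w_i\|_2^2 - a_i^2 \bigr)^2,
\]
has gradient $\nabla_{(w_i, a_i)} \mathcal{R} = \mu\, q_i\, (w_i, -a_i)$, which points exactly along the orbit tangent, so descent on $\mathcal{R}$ moves only along gauge orbits and leaves $f_\theta$ unchanged. Under the gradient flow of $\mathcal{R}$ alone, $\dot{q}_i = -2\mu\, q_i \bigl( \|w_i\|_2^2 + a_i^2 \bigr)$: each imbalance mode relaxes dissipatively toward the balanced point $q_i = 0$.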
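A minimal sketch of how such a penalty could be added to a training loop, assuming PyTorch; the model \texttt{TwoLayerReLU}, the helper \texttt{balance\_penalty}, and the coefficient \texttt{mu} are hypothetical names introduced here for illustration, not the paper's implementation:
\begin{verbatim}
# Illustrative norm-balancing (orbit-selection) penalty for a two-layer
# ReLU network. The model, the pairing of incoming/outgoing weights, and
# the coefficient `mu` are hypothetical choices for exposition.
import torch
import torch.nn as nn

class TwoLayerReLU(nn.Module):
    def __init__(self, d_in: int, m: int):
        super().__init__()
        self.hidden = nn.Linear(d_in, m, bias=False)  # rows are w_i
        self.out = nn.Linear(m, 1, bias=False)        # entries are a_i

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.out(torch.relu(self.hidden(x)))

def balance_penalty(model: TwoLayerReLU, mu: float = 1e-2) -> torch.Tensor:
    # q_i = ||w_i||^2 - a_i^2 measures scale imbalance along each gauge
    # orbit; the quartic penalty (mu/4) * sum_i q_i^2 vanishes on the
    # balanced orbit points q_i = 0.
    w_sq = model.hidden.weight.pow(2).sum(dim=1)  # shape (m,): ||w_i||^2
    a_sq = model.out.weight.pow(2).squeeze(0)     # shape (m,): a_i^2
    return 0.25 * mu * (w_sq - a_sq).pow(2).sum()

# Usage: add the penalty to the task loss. Its gradient is tangent to the
# rescaling orbits, so in the continuous-time limit it rebalances scales
# without changing f(x); discrete steps preserve f only approximately.
model = TwoLayerReLU(d_in=8, m=32)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
x, y = torch.randn(64, 8), torch.randn(64, 1)
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y) + balance_penalty(model)
    loss.backward()
    opt.step()
\end{verbatim}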