Modern machine learning models are often trained in a setting where the number of parameters exceeds the number of training samples. To understand the implicit bias of gradient descent in such overparameterized models, prior work has studied diagonal linear neural networks in the regression setting. These studies have shown that, when initialized with small weights, gradient descent tends to favor solutions with minimal $\ell^1$-norm, an effect known as implicit regularization. In this paper, we investigate implicit regularization in diagonal linear neural networks of depth $D\ge 2$ for overparameterized linear regression problems. We focus on analyzing the approximation error between the limit point of gradient flow trajectories and the solution to the $\ell^1$-minimization problem. By deriving tight upper and lower bounds on the approximation error, we precisely characterize how it depends on the scale of initialization $\alpha$. Our results reveal a qualitative difference between depths: for $D \ge 3$, the error decreases linearly with $\alpha$, whereas for $D=2$, it decreases at rate $\alpha^{1-\varrho}$, where the parameter $\varrho \in [0,1)$ can be explicitly characterized. Interestingly, this parameter is closely linked to the so-called null space property constants studied in the sparse recovery literature. We demonstrate the asymptotic tightness of our bounds through explicit examples. Numerical experiments corroborate our theoretical findings and suggest that deeper networks, i.e., $D \ge 3$, may lead to better generalization, particularly for realistic initialization scales.
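The following is a minimal numerical sketch, not the paper's experiment code, illustrating the setting described above: gradient descent on a depth-$D$ diagonal linear network, using the common two-branch parameterization $x = u^{\odot D} - v^{\odot D}$ with small initialization scale $\alpha$, compared against the $\ell^1$-minimal interpolant computed by basis pursuit. The problem sizes, step size, and iteration count are illustrative assumptions and may need tuning for small $\alpha$.

```python
# Minimal sketch (assumed setup, not the paper's code): discretized gradient flow
# on a depth-D diagonal linear network for an overparameterized linear regression
# problem, compared to the l1-minimal interpolant from basis pursuit.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, d, s = 20, 50, 3                       # n samples, d parameters (n < d), s-sparse truth
A = rng.standard_normal((n, d)) / np.sqrt(n)
x_star = np.zeros(d)
x_star[rng.choice(d, s, replace=False)] = rng.standard_normal(s)
b = A @ x_star

# l1-minimal interpolant: min ||x||_1 s.t. Ax = b, written as an LP in (x+, x-) >= 0.
c = np.ones(2 * d)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b, bounds=(0, None), method="highs")
x_l1 = res.x[:d] - res.x[d:]

def gradient_flow_limit(D, alpha, lr=1e-2, steps=200_000):
    """Gradient descent on L(u, v) = 0.5 * ||A(u**D - v**D) - b||^2."""
    u = alpha * np.ones(d)                # small, uniform initialization of scale alpha
    v = alpha * np.ones(d)
    for _ in range(steps):
        r = A @ (u**D - v**D) - b         # residual
        g = A.T @ r                       # gradient w.r.t. the effective parameter x
        u -= lr * D * u**(D - 1) * g      # chain rule through u**D
        v += lr * D * v**(D - 1) * g      # chain rule through -v**D
    return u**D - v**D

# Approximation error between the (approximate) gradient flow limit and the l1 solution.
for D in (2, 3):
    for alpha in (1e-1, 1e-2):
        x_inf = gradient_flow_limit(D, alpha)
        err = np.linalg.norm(x_inf - x_l1, 1)
        print(f"D={D}, alpha={alpha:.0e}: ||x_inf - x_l1||_1 = {err:.3e}")
```

Rerunning the sketch for decreasing $\alpha$ gives a rough picture of how the approximation error scales with the initialization, in the spirit of the rates stated above.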