We study the problem of learning a single neuron under the standard squared loss in the presence of arbitrary label noise and group-level distributional shifts, for a broad family of covariate distributions. Our goal is to identify a ``best-fit'' neuron, parameterized by $\mathbf{w}_*$, that performs well under the most challenging reweighting of the groups. Specifically, we address a group distributionally robust optimization (group DRO) problem: given sample access to $K$ distinct distributions $\mathcal{p}_{[1]},\dots,\mathcal{p}_{[K]}$, we seek to approximate the vector $\mathbf{w}_*$ that minimizes the worst-case objective over convex combinations $\boldsymbol{\lambda} \in \Delta_K$ of the group distributions, where the objective is $\sum_{i \in [K]} \lambda_{[i]}\, \mathbb{E}_{(\mathbf{x},y)\sim\mathcal{p}_{[i]}}\,(\sigma(\mathbf{w}\cdot\mathbf{x})-y)^2 - \nu\, d_f\!\big(\boldsymbol{\lambda},\tfrac{1}{K}\mathbf{1}\big)$ and $d_f$ is an $f$-divergence that imposes an (optional) penalty on deviations from uniform group weights, scaled by a parameter $\nu \geq 0$. We develop a computationally efficient primal-dual algorithm that outputs a vector $\widehat{\mathbf{w}}$ that is constant-factor competitive with $\mathbf{w}_*$ under the worst-case group weighting. Our analytical framework directly confronts the inherent nonconvexity of the loss, providing robust learning guarantees in the face of arbitrary label corruptions and group-specific distributional shifts. An implementation of the dual extrapolation update motivated by our algorithmic framework shows promise on LLM pre-training benchmarks.
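To make the min-max structure concrete, the following is a minimal NumPy sketch of one plausible primal-dual instantiation of the objective above. It is not the paper's algorithm: it assumes a logistic sigmoid for $\sigma$, the KL divergence for $d_f$, and an extragradient-style lookahead step as a stand-in for the dual extrapolation update; all function names and step sizes are illustrative.

```python
# Sketch (assumptions noted above) of primal-dual updates for the
# penalized group-DRO objective:
#   min_w max_{lam in Delta_K}  sum_i lam_i * L_i(w) - nu * KL(lam, uniform),
# where L_i(w) is the empirical squared loss on group i.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def group_losses_and_grads(w, groups):
    """Empirical squared loss and its w-gradient for each group (X_i, y_i)."""
    losses, grads = [], []
    for X, y in groups:                       # X: (n_i, d), y: (n_i,)
        p = sigmoid(X @ w)
        r = p - y
        losses.append(np.mean(r ** 2))
        # grad of mean (sigma(w.x) - y)^2 is mean of 2 r sigma'(w.x) x
        grads.append((2.0 * r * p * (1.0 - p)) @ X / len(y))
    return np.array(losses), np.stack(grads)

def simplex_ascent_step(lam, losses, nu, eta):
    """Exponentiated-gradient ascent on lam for the KL-penalized objective."""
    K = len(lam)
    g = losses - nu * (np.log(K * lam + 1e-12) + 1.0)  # gradient in lam
    lam_new = lam * np.exp(eta * g)
    return lam_new / lam_new.sum()                      # project onto Delta_K

def primal_dual_gda(groups, d, T=500, eta_w=0.1, eta_lam=0.05, nu=0.1, seed=0):
    rng = np.random.default_rng(seed)
    w = 0.1 * rng.standard_normal(d)
    lam = np.full(len(groups), 1.0 / len(groups))       # start at uniform weights
    for _ in range(T):
        # Lookahead (extrapolation) step evaluated at the current iterate.
        losses, grads = group_losses_and_grads(w, groups)
        w_half = w - eta_w * (lam @ grads)
        lam_half = simplex_ascent_step(lam, losses, nu, eta_lam)
        # Update the main iterate with gradients taken at the lookahead point.
        losses_h, grads_h = group_losses_and_grads(w_half, groups)
        w = w - eta_w * (lam_half @ grads_h)
        lam = simplex_ascent_step(lam, losses_h, nu, eta_lam)
    return w, lam
```

In this sketch, setting $\nu = 0$ removes the penalty and yields unpenalized worst-group minimization, while a large $\nu$ keeps $\boldsymbol{\lambda}$ near uniform and the objective close to the average loss over groups.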