In recent years, machine learning models have achieved remarkable success under the independently and identically distributed (i.i.d.) assumption. However, this assumption is easily violated in real-world applications, giving rise to the Out-of-Distribution (OOD) problem. Understanding how modern over-parameterized DNNs behave under non-trivial natural distributional shifts is essential, as the current theoretical understanding remains insufficient: existing theoretical works often provide vacuous results for over-parameterized models in OOD scenarios, or even contradict empirical findings. To this end, we investigate the OOD generalization of over-parameterized models under general benign overfitting conditions. Our analysis focuses on a random feature model under non-trivial natural distributional shifts, where the benign-overfitting estimator suffers a constant excess OOD loss despite attaining zero excess in-distribution (ID) loss. We show that in this scenario, further increasing the model's parameterization can significantly reduce the OOD loss. Intuitively, the variance term of the ID loss remains low thanks to the orthogonality of long-tail features, so noise overfitted during training generally does not raise the test loss. In OOD cases, however, the distributional shift inflates the variance term. Fortunately, the shift is independent of the individual input x, so the orthogonality of long-tail features is preserved. Expanding the hidden dimension further improves this orthogonality by mapping the features into higher-dimensional spaces, thereby reducing the variance term. We further show that model ensembles also reduce the OOD loss, akin to increasing model capacity. These insights explain the empirical phenomenon of improved OOD generalization through model ensembles, and our simulations are consistent with the theoretical results.
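The setup above can be illustrated with a toy simulation: a fixed random-feature model fit by minimum-norm least squares (the benign-overfitting interpolator when the width exceeds the sample size), evaluated on both an ID test set and a covariance-shifted OOD test set. This is a minimal sketch, not the paper's exact construction; the teacher model, shift magnitude, and all hyperparameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_feature_losses(d=20, n=100, widths=(50, 2000),
                          shift=2.0, noise=0.1, n_test=2000):
    """Excess ID/OOD test loss of a min-norm random-feature fit.

    All choices below (linear teacher, Gaussian inputs, isotropic
    covariance shift) are illustrative stand-ins for the paper's
    "natural distributional shift" setting.
    """
    beta = rng.standard_normal(d) / np.sqrt(d)   # teacher weights
    X = rng.standard_normal((n, d))
    y = X @ beta + noise * rng.standard_normal(n)  # noisy training labels

    X_id = rng.standard_normal((n_test, d))      # same input distribution
    X_ood = shift * rng.standard_normal((n_test, d))  # scaled (shifted) inputs

    losses = {}
    for N in widths:
        W = rng.standard_normal((d, N)) / np.sqrt(d)  # fixed random features
        phi = lambda Z: np.maximum(Z @ W, 0.0)        # ReLU feature map
        # Min-norm least squares: interpolates the noisy labels when N >> n.
        a, *_ = np.linalg.lstsq(phi(X), y, rcond=None)
        id_loss = np.mean((phi(X_id) @ a - X_id @ beta) ** 2)
        ood_loss = np.mean((phi(X_ood) @ a - X_ood @ beta) ** 2)
        losses[N] = (id_loss, ood_loss)
    return losses
```

Comparing `losses` across widths lets one probe whether the OOD loss shrinks as the hidden dimension grows, the trend the abstract predicts.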