We study the asymptotic generalization of an overparameterized linear model for multiclass classification under the Gaussian covariates bi-level model introduced in Subramanian et al.~'22, where the number of data points, features, and classes all grow together. We fully resolve the conjecture posed in Subramanian et al.~'22, matching the predicted regimes for generalization. Furthermore, our new lower bounds are akin to an information-theoretic strong converse: they establish that the misclassification rate goes to 0 or 1 asymptotically. One surprising consequence of our tight results is that the min-norm interpolating classifier can be asymptotically suboptimal relative to noninterpolating classifiers in the regime where the min-norm interpolating regressor is known to be optimal. The key to our tight analysis is a new variant of the Hanson-Wright inequality which is broadly useful for multiclass problems with sparse labels. As an application, we show that the same type of analysis can be used to analyze the related multilabel classification problem under the same bi-level ensemble.
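For context, the classical Hanson-Wright inequality, the standard concentration bound that the paper's new variant adapts to the multiclass sparse-label setting, can be stated as follows (this is the textbook form with the usual sub-Gaussian norm notation, not a statement taken from this work):

```latex
% Classical Hanson-Wright inequality (standard form).
% Let X = (X_1, \dots, X_n) have independent, mean-zero,
% sub-Gaussian coordinates with \|X_i\|_{\psi_2} \le K,
% and let A be a fixed n \times n matrix. Then for all t \ge 0,
\[
\mathbb{P}\!\left( \bigl| X^\top A X - \mathbb{E}\, X^\top A X \bigr| > t \right)
\;\le\; 2 \exp\!\left( -c \min\!\left( \frac{t^2}{K^4 \|A\|_F^2},\; \frac{t}{K^2 \|A\|} \right) \right),
\]
% where \|A\|_F is the Frobenius norm, \|A\| the operator norm,
% and c > 0 is an absolute constant.
```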