We study kernel regression with common rotation-invariant kernels on real datasets including CIFAR-5m, SVHN, and ImageNet. We give a theoretical framework that predicts learning curves (test risk vs. sample size) from only two measurements: the empirical data covariance matrix and an empirical polynomial decomposition of the target function $f_*$. The key new idea is an analytical approximation of a kernel's eigenvalues and eigenfunctions with respect to an anisotropic data distribution. The eigenfunctions resemble Hermite polynomials of the data, so we call this approximation the Hermite eigenstructure ansatz (HEA). We prove the HEA for Gaussian data, but we find that real image data is often "Gaussian enough" for the HEA to hold well in practice, enabling us to predict learning curves by applying prior results relating kernel eigenstructure to test risk. Extending beyond kernel regression, we empirically find that MLPs in the feature-learning regime learn Hermite polynomials in the order predicted by the HEA. Our HEA framework is a proof of concept that an end-to-end theory of learning which maps dataset structure all the way to model performance is possible for nontrivial learning algorithms on real datasets.