Many real-world datasets contain hidden structure that cannot be detected through simple linear correlations between input features. For example, latent factors may influence the data in a coordinated way even though their effect is invisible to covariance-based methods such as PCA. In practice, nonlinear neural networks often succeed in extracting such hidden structure in unsupervised and self-supervised learning. However, constructing a minimal high-dimensional model in which this advantage can be rigorously analyzed has remained an open theoretical challenge. We introduce a tractable high-dimensional spiked model with two latent factors: one visible in the covariance, and a second that is statistically dependent on the first yet uncorrelated with it, so that it appears only in higher-order moments. PCA and linear autoencoders fail to recover the latter, while a minimal nonlinear autoencoder provably extracts both. We analyze both the population risk and empirical risk minimization. Our model also provides a tractable example in which the self-supervised test loss is poorly aligned with representation quality: nonlinear autoencoders recover latent structure that linear methods miss, even though their reconstruction loss is higher.
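The abstract does not spell out the construction, but a minimal sketch of the kind of model it describes might look as follows. All specifics here are illustrative assumptions rather than the paper's definitions: the directions u and v, the spike strength beta, and the choice b = sign(|a| - median|a|) are one hypothetical way to build a second factor that is uncorrelated with the first yet fully dependent on it, and that leaves the covariance untouched.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 10_000, 400        # samples, ambient dimension (illustrative sizes)
beta = 4.0                # strength of the covariance-visible spike

# Orthonormal spike directions: u (visible to PCA) and v (hidden).
u = np.zeros(d); u[0] = 1.0
v = np.zeros(d); v[1] = 1.0

# Latent factors: a is standard Gaussian; b = sign(|a| - median|a|) is a
# unit-variance sign variable. By symmetry E[a*b] = 0 (uncorrelated), yet
# b is a deterministic function of a (fully dependent).
a = rng.standard_normal(n)
b = np.sign(np.abs(a) - 0.6745)   # 0.6745 = Phi^{-1}(0.75), median of |N(0,1)|

# b replaces the Gaussian noise along v, so the variance along v stays
# exactly 1 and the covariance of x reduces to I + beta * u u^T.
z = rng.standard_normal((n, d))
z -= np.outer(z @ v, v)           # project the noise out of the v direction
X = np.sqrt(beta) * np.outer(a, u) + np.outer(b, v) + z

# PCA sees only u: the top eigenvector aligns with u, none align with v.
C = X.T @ X / n
eigvals, eigvecs = np.linalg.eigh(C)      # ascending eigenvalues
top = eigvecs[:, -1]
print("overlap of top PC with u:", abs(top @ u))    # close to 1
print("overlap of top PC with v:", abs(top @ v))    # close to 0
print("max |PC . v| over top 10 PCs:",
      np.abs(eigvecs[:, -10:].T @ v).max())         # ~ 1/sqrt(d), pure noise

# A third-order cross-moment exposes the hidden factor:
# E[(x.u)^2 (x.v)] = beta * E[a^2 b], which is bounded away from zero.
m = np.mean((X @ u) ** 2 * (X @ v))
print("empirical E[(x.u)^2 (x.v)]:", m)             # clearly nonzero
```

In this construction the covariance of x is exactly isotropic outside the u direction, so PCA has no signal to find along v, while the third-order statistic E[(x·u)²(x·v)] = β·E[a²b] is nonzero (roughly 0.86·β for this choice of b). This illustrates the sense in which the hidden factor "appears only in higher-order moments," which covariance-based methods cannot see but a nonlinear encoder can exploit.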