We study a class of iterated empirical risk minimization (ERM) procedures in which two successive ERMs are performed on the same dataset, and the predictions of the first estimator enter as an argument of the loss function of the second. This setting, which arises naturally in active learning and reweighting schemes, introduces intricate statistical dependencies across samples and fundamentally distinguishes the problem from classical single-stage ERM analyses. For linear models trained with a broad class of convex losses on Gaussian mixture data, we derive a sharp asymptotic characterization of the test error in the high-dimensional regime where the sample size and ambient dimension scale proportionally. Our results provide explicit asymptotic predictions for the performance of the second-stage estimator despite the reuse of data and the presence of prediction-dependent losses. We apply this theory to revisit a well-studied pool-based active learning problem, removing oracle and sample-splitting assumptions made in prior work. We uncover a fundamental tradeoff in how the labeling budget should be allocated across stages, and demonstrate a double-descent behavior of the test error driven purely by data selection, rather than by model size or sample count.
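The following is a minimal sketch, not the paper's exact procedure, of the two-stage setup described above: a linear classifier is fit by ERM on an initial labeled batch drawn from a Gaussian mixture, and its predictions on the same pool determine which additional points are labeled and used in the second stage. The pool size, budget split, logistic loss, and margin-based selection rule below are illustrative assumptions.

```python
# Two-stage ERM with data reuse on a Gaussian mixture (illustrative sketch).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 2000, 200                        # pool size and ambient dimension (proportional regime)
mu = rng.standard_normal(d) / np.sqrt(d)
y = rng.choice([-1, 1], size=n)         # balanced mixture labels
X = np.outer(y, mu) + rng.standard_normal((n, d))

budget, frac_stage1 = 400, 0.5          # total labeling budget and its split across stages
n1 = int(frac_stage1 * budget)

# Stage 1: ERM (logistic loss, linear model, no intercept) on a uniformly sampled labeled batch.
idx1 = rng.choice(n, size=n1, replace=False)
clf1 = LogisticRegression(fit_intercept=False, C=1.0).fit(X[idx1], y[idx1])

# Stage 2: first-stage predictions enter through the data selection -- here we label
# the pool points with the smallest |margin| under the stage-1 estimator.
margin = np.abs(X @ clf1.coef_.ravel())
order = np.argsort(margin)
idx2 = order[~np.isin(order, idx1)][: budget - n1]
idx_all = np.concatenate([idx1, idx2])  # the same dataset is reused across both stages
clf2 = LogisticRegression(fit_intercept=False, C=1.0).fit(X[idx_all], y[idx_all])

# Test error of the second-stage estimator on fresh samples from the mixture.
y_te = rng.choice([-1, 1], size=5000)
X_te = np.outer(y_te, mu) + rng.standard_normal((5000, d))
print("stage-2 test error:", np.mean(clf2.predict(X_te) != y_te))
```

Sweeping `frac_stage1` in this sketch mimics the budget-allocation tradeoff studied in the paper, since the quality of the stage-1 estimator governs which points are selected for stage 2.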