Trained models are often composed with post-hoc transforms such as temperature scaling (TS), ensembling and stochastic weight averaging (SWA) to improve performance, robustness, uncertainty estimation, etc. However, such transforms are typically applied only after the base models have already been finalized by standard means. In this paper, we challenge this practice with an extensive empirical study. In particular, we demonstrate a phenomenon that we call post-hoc reversal, where performance trends are reversed after applying post-hoc transforms. This phenomenon is especially prominent in high-noise settings. For example, while base models overfit badly early in training, both ensembling and SWA favor base models trained for more epochs. Post-hoc reversal can also prevent the appearance of double descent and mitigate mismatches between test loss and test error seen in base models. Preliminary analyses suggest that these transforms induce reversal by suppressing the influence of mislabeled examples, exploiting differences in their learning dynamics from those of clean examples. Based on our findings, we propose post-hoc selection, a simple technique whereby post-hoc metrics inform model development decisions such as early stopping, checkpointing, and broader hyperparameter choices. Our experiments span real-world vision, language, tabular and graph datasets. On an LLM instruction tuning dataset, post-hoc selection results in >1.5x MMLU improvement compared to naive selection.
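The post-hoc selection idea described above can be sketched as a toy example. This is an illustrative simulation, not the paper's implementation: the function names and the loss values are invented to show how the selected epoch changes when the selection metric is computed after a transform such as SWA rather than on the base model.

```python
# Hypothetical sketch of post-hoc selection (illustrative names, toy data).
# Instead of early-stopping on the base model's validation loss, apply the
# post-hoc transform first (here: weight averaging over checkpoints, as in
# SWA), then select the epoch that minimizes the *post-hoc* validation loss.

def naive_selection(base_val_losses):
    """Pick the epoch with the best base-model validation loss."""
    return min(range(len(base_val_losses)), key=lambda e: base_val_losses[e])

def post_hoc_selection(post_hoc_val_losses):
    """Pick the epoch with the best validation loss *after* the transform."""
    return min(range(len(post_hoc_val_losses)), key=lambda e: post_hoc_val_losses[e])

# Toy losses illustrating post-hoc reversal under label noise: the base
# model overfits after epoch 2, while the SWA model keeps improving with
# more training epochs -- so the two criteria pick different checkpoints.
base_losses = [1.0, 0.80, 0.70, 0.90, 1.20, 1.50]  # overfits after epoch 2
swa_losses  = [1.0, 0.85, 0.70, 0.60, 0.55, 0.50]  # improves monotonically

print(naive_selection(base_losses))    # -> 2 (early checkpoint)
print(post_hoc_selection(swa_losses))  # -> 5 (latest checkpoint)
```

Under post-hoc reversal, the naive criterion discards exactly the later checkpoints that the transformed model benefits from; selecting on the post-hoc metric recovers them.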