The "alignment tax" of post-training is typically framed as a drop in task accuracy. We show it also involves a severe loss of calibration, making models overconfident, less reliable, and model outputs less diverse. We show that this trade-off can be navigated effectively via a simple post-hoc intervention: interpolating between a model's weights before and after alignment. Crucially, this is not a strict trade-off. We find that the process consistently reveals Pareto-optimal interpolations - models that improve accuracy beyond both parents while substantially recovering the calibration lost during alignment. Our work demonstrates that simple model merging provides a computationally efficient method for mitigating the full scope of the alignment tax, yielding models that are more capable and more reliable.