Scaling up model parameters has long been a prevalent training paradigm, driven by the assumption that larger models yield superior generation capabilities. However, under lossy context compression in a compressor-decoder setup, we observe a Size-Fidelity Paradox: increasing the compressor size can reduce the faithfulness of reconstructed contexts even as training loss decreases. Through extensive experiments across models from 0.6B to 90B parameters, we attribute this paradox to two dominant factors: 1) knowledge overwriting: larger models increasingly replace source facts with their own prior beliefs, e.g., ``the white strawberry'' $\to$ ``the red strawberry''; and 2) semantic drift: larger models tend to paraphrase or restructure content instead of reproducing it verbatim, e.g., ``Alice hit Bob'' $\to$ ``Bob hit Alice''. Holding model size fixed, we examine the emergent properties of compressed context representations. We show that the culprit is not parameter count itself, but the excessive semantic capacity and amplified generative uncertainty that accompany scaling. Specifically, the increased rank of context embeddings facilitates prior-knowledge intrusion, whereas higher entropy over token-prediction distributions promotes rewriting. Our results complement existing evaluations of the context compression paradigm, revealing a breakdown of scaling laws for faithful preservation in open-ended generation.
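As a minimal formalization of the two diagnostics above (the notation here is illustrative, not taken from the source): let $Z \in \mathbb{R}^{n \times d}$ collect the $n$ compressed context embeddings produced by the compressor, and let $p_t(\cdot)$ denote the decoder's next-token distribution over vocabulary $\mathcal{V}$ at step $t$ of a $T$-step reconstruction. The two quantities can then be measured as
\[
\text{semantic capacity:}\ \operatorname{rank}(Z), \qquad \text{generative uncertainty:}\ \bar{H} = -\frac{1}{T}\sum_{t=1}^{T}\sum_{v \in \mathcal{V}} p_t(v)\log p_t(v),
\]
where, under this reading, a higher $\operatorname{rank}(Z)$ leaves more room for prior knowledge to intrude into the reconstruction, and a higher mean entropy $\bar{H}$ makes paraphrastic rewrites of the source more likely.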