Post-training improves the instruction-following and helpfulness of large language models (LLMs) but often reduces generation diversity, which leads to repetitive outputs in open-ended settings, a phenomenon known as mode collapse. Motivated by evidence that LLM layers play distinct functional roles, we hypothesize that mode collapse can be localized to specific layers and that restoring a carefully chosen range of layers to their pre-trained weights can recover diversity while maintaining high output quality. To validate this hypothesis and decide which layers to restore, we design a proxy task -- Constrained Random Character (CRC) -- with an explicit validity set and a natural diversity objective. Results on CRC reveal a clear diversity-validity trade-off across restoration ranges and identify configurations that increase diversity with minimal quality loss. Based on these findings, we propose Selective Layer Restoration (SLR), a training-free method that restores selected layers of a post-trained model to their pre-trained weights, yielding a hybrid model with the same architecture and parameter count that incurs no additional inference cost. Across three different tasks (creative writing, open-ended question answering, and multi-step reasoning) and three different model families (Llama, Qwen, and Gemma), we find that SLR consistently and substantially improves output diversity while maintaining high output quality.
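Since SLR is a training-free weight swap, its core operation can be sketched in a few lines. The sketch below is illustrative only: the parameter-naming convention (`model.layers.{i}.`) and the restored layer range are assumptions, not details taken from the paper, and toy scalars stand in for weight tensors.

```python
def selective_layer_restoration(post_trained, pre_trained, layer_range,
                                prefix="model.layers."):
    """Build a hybrid state dict: post-trained weights everywhere except
    the transformer layers in `layer_range`, which are restored to their
    pre-trained values. Architecture and parameter count are unchanged."""
    restored = set(layer_range)
    hybrid = {}
    for name, weight in post_trained.items():
        if name.startswith(prefix):
            # Parse the layer index from names like "model.layers.12.self_attn..."
            layer_idx = int(name[len(prefix):].split(".")[0])
            if layer_idx in restored:
                hybrid[name] = pre_trained[name]  # restore pre-trained weight
                continue
        hybrid[name] = weight  # keep post-trained weight
    return hybrid

# Toy example with scalars standing in for tensors:
pre = {"model.layers.0.w": 0.0, "model.layers.1.w": 1.0, "lm_head.w": 9.0}
post = {"model.layers.0.w": 5.0, "model.layers.1.w": 6.0, "lm_head.w": 7.0}
hybrid = selective_layer_restoration(post, pre, layer_range=range(1, 2))
# → layer 1 is restored to its pre-trained value; layer 0 and the head
#   keep their post-trained values
```

In practice the two state dicts would come from the pre-trained and post-trained checkpoints of the same model, and the restored range would be chosen using a diagnostic such as the CRC proxy task described above.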