Language model post-training has enhanced instruction-following and performance on many downstream tasks, but it also carries an often-overlooked cost on tasks with many valid answers. We characterize three desiderata for conditional distributional modeling (in-context steerability, valid output space coverage, and distributional alignment) and document, across three model families, how current post-training can degrade these properties. In particular, we disambiguate between two kinds of in-context learning: ICL for eliciting existing underlying knowledge or capabilities, and in-context steerability, where a model must use in-context information to override its priors and steer toward a novel data-generating distribution. To better evaluate and improve these desiderata, we introduce Spectrum Suite, a large-scale resource compiled from >40 data sources and spanning >90 tasks that require models to steer to and match diverse distributions, ranging from varied human preferences to numerical distributions and more. We find that while current post-training techniques help elicit underlying capabilities and knowledge, they hurt models' ability to flexibly steer in-context. To mitigate these issues, we propose Spectrum Tuning, a post-training method that uses Spectrum Suite to improve steerability and distributional coverage. Spectrum Tuning often improves over both pretrained models and their instruction-tuned counterparts, enhancing steerability, spanning more of the valid output space, and improving distributional alignment on held-out datasets.