While finetuning language models from pairwise preferences has proven remarkably effective, the underspecified nature of natural language presents critical challenges. Direct preference feedback is uninterpretable, difficult to provide where multidimensional criteria apply, and often inconsistent, either because it is based on incomplete instructions or provided by diverse principals. To address these challenges, we consider a two-step preference modeling procedure that first resolves the underspecification by selecting a context, and then evaluates preference with respect to the chosen context. We decompose reward modeling error according to these two steps, which suggests that supervising context, in addition to context-specific preference, may be a viable approach to aligning models with diverse human preferences. For this to work, models must be able to evaluate context-specific preference reliably. To this end, we contribute context-conditioned preference datasets and accompanying experiments that investigate this ability in language models. We use our datasets to (1) show that existing preference models benefit from, but fail to fully consider, added context, (2) finetune a context-aware reward model whose context-specific performance exceeds that of GPT-4 and Llama 3 70B on the tested datasets, and (3) investigate the value of context-aware preference modeling.
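To make the two-step procedure concrete, one minimal way to write it is as a marginal over contexts; this is an illustrative sketch under assumed notation (a prompt $x$, candidate responses $y_1, y_2$, and a context $c$ drawn from a context distribution $P(c \mid x)$), not necessarily the paper's exact formulation:

$$
P(y_1 \succ y_2 \mid x) \;=\; \mathbb{E}_{c \sim P(c \mid x)}\!\left[\, P(y_1 \succ y_2 \mid x, c) \,\right].
$$

Under this reading, overall reward modeling error splits into error in the first step (modeling which context $c$ resolves the underspecified prompt) and error in the second step (modeling the context-conditional preference $P(y_1 \succ y_2 \mid x, c)$), which is why supervising context alongside context-specific preference may help.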