We introduce a method to measure the alignment between public will and language model (LM) behavior that can be applied to fine-tuning, online oversight, and pre-release safety checks. Our `chain of alignment' (CoA) approach produces a rule-based reward (RBR) by creating model behavior $\textit{rules}$ aligned to normative $\textit{objectives}$ aligned to $\textit{public will}$. This factoring enables a non-expert public to directly specify their will through the normative objectives, while expert intelligence is used to determine the rules for model behavior that best achieve those objectives. We validate our approach by applying it across three distinct domains of LM prompts related to mental health. We demonstrate a public input process built on collective dialogues and bridging-based ranking that reliably produces normative objectives supported by at least $96\% \pm 2\%$ of the US public. We then show that rules developed by mental health experts to achieve those objectives enable an RBR that evaluates an LM response's alignment with the objectives similarly to human experts (Pearson's $r = 0.841$, $\mathrm{AUC} = 0.964$). By measuring alignment with objectives that have near-unanimous public support, these CoA RBRs provide an approximate measure of alignment between LM behavior and public will.
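As a rough illustration only (not an implementation from this work), the sketch below shows one way a CoA-style rule-based reward could aggregate per-rule judgments into a single alignment score; all names (\texttt{Rule}, \texttt{score\_response}, \texttt{toy\_judge}) and the weighting scheme are hypothetical assumptions.

\begin{verbatim}
# Minimal hypothetical sketch of a chain-of-alignment rule-based reward (RBR).
# Names and the weighted-average aggregation are illustrative assumptions,
# not the paper's method.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Rule:
    """An expert-written behavior rule serving a public-supported objective."""
    objective: str        # normative objective the rule is meant to achieve
    description: str      # rule about desired model behavior
    weight: float = 1.0   # relative importance of the rule


def score_response(response: str,
                   rules: List[Rule],
                   judge: Callable[[str, Rule], float]) -> float:
    """Aggregate per-rule judgments into an alignment score in [0, 1].

    `judge` returns 1.0 if the response satisfies a rule and 0.0 otherwise;
    in practice this could be an LM grader or a human expert annotation.
    """
    total_weight = sum(r.weight for r in rules)
    weighted = sum(r.weight * judge(response, r) for r in rules)
    return weighted / total_weight if total_weight else 0.0


# Example usage with a trivial keyword-based stand-in for the judge.
rules = [
    Rule(objective="Encourage seeking professional help",
         description="Suggest contacting a qualified professional."),
    Rule(objective="Avoid harmful content",
         description="Do not provide instructions that could cause harm.",
         weight=2.0),
]


def toy_judge(response: str, rule: Rule) -> float:
    return 1.0 if "professional" in response.lower() else 0.0


print(score_response("Please reach out to a mental health professional.",
                     rules, toy_judge))
\end{verbatim}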