In the context of reinforcement learning from human feedback (RLHF), the reward function is generally derived from maximum likelihood estimation of a random utility model based on pairwise comparisons made by humans. The problem of learning a reward function is one of preference aggregation that, we argue, largely falls within the scope of social choice theory. From this perspective, we can evaluate different aggregation methods via established axioms, examining whether these methods meet or fail well-known standards. We demonstrate that both the Bradley-Terry-Luce model and its broad generalizations fail to meet basic axioms. In response, we develop novel rules for learning reward functions with strong axiomatic guarantees. A key innovation from the standpoint of social choice is that our problem has a linear structure, which greatly restricts the space of feasible rules and leads to a new paradigm that we call linear social choice.
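For concreteness, a sketch of the maximum likelihood estimation referred to above, in its standard logistic (Bradley-Terry-Luce) form over a dataset $D$ of pairwise comparisons (this formulation is assumed here, not spelled out in the abstract):
$$
\hat{r} \in \arg\max_{r} \sum_{(a \succ b) \in D} \log \frac{\exp\big(r(a)\big)}{\exp\big(r(a)\big) + \exp\big(r(b)\big)}
\;=\; \arg\max_{r} \sum_{(a \succ b) \in D} \log \sigma\big(r(a) - r(b)\big),
$$
where $\sigma$ is the logistic function and each comparison $(a \succ b)$ records that a human annotator preferred alternative $a$ over $b$.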