Normative theories allow one to derive key parts of an ML algorithm from first principles, which is crucial at a time of heightened scrutiny of ML work. Direct Preference Optimization (DPO) cleverly bypasses reward modeling by making an explicit link with a specific normative model of human choice. Our paper elevates this connection to the full generality of DPO's normative framework. Getting there requires reworking human choice theory's textbook path for a better fit with RLHF/ML. The result is a remarkably broad viewpoint on preference optimization, one that encompasses the current panorama of DPO follow-ups. It also unveils unexpected riches for ML, chief among which are support for non-convex losses, the fact that any compliant ML analytical choice can be embedded with any human choice model, and a normative umbrella wide enough to cover DPO's extensions (margins, length correction, ...). A toy experiment ``far away'' from the DPO crowd is provided.