Recent research on fine-tuning large language models (LLMs) by aggregating multiple preferences has attracted considerable attention. However, the existing literature predominantly focuses on the empirical performance of aggregation algorithms, while neglecting agents' strategic incentives to misreport their preferences. In this paper, we formalize this setting as a multi-parameter mechanism design problem, in which an LLM provider designs both training and payment rules to achieve specific objectives and to promote the truthful reporting of preferences. First, we establish the necessity of a payment scheme by demonstrating that, without payments, truth-telling is a strictly dominated strategy under a wide range of training rules. We then introduce the affine maximizer payment scheme for the social-welfare-maximizing training rules widely used in practice, which ensures both dominant-strategy incentive compatibility (DSIC) and individual rationality (IR). Furthermore, we prove that under mild conditions, any other payment rule that implements these training rules in DSIC can be converted to the affine maximizer payment by adding a term independent of the agent's own report. We also show that this mechanism satisfies approximate DSIC when the mechanism's input is a biased version of the reported preferences, demonstrating its robustness in real-world applications.
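For concreteness, a minimal sketch of the standard affine maximizer form from the mechanism design literature, under assumed notation not defined in the abstract (agent weights $w_i > 0$, reported preferences $\hat{\sigma}_i$, and valuations $v_i$ over trained models $\theta$); the paper's exact construction may differ. The training rule selects
$$\theta^* \in \arg\max_{\theta} \sum_{i} w_i\, v_i(\theta; \hat{\sigma}_i),$$
and each agent $i$ pays
$$p_i(\hat{\sigma}) = \frac{1}{w_i}\Big[\max_{\theta} \sum_{j \neq i} w_j\, v_j(\theta; \hat{\sigma}_j) \;-\; \sum_{j \neq i} w_j\, v_j(\theta^*; \hat{\sigma}_j)\Big],$$
i.e., the externality agent $i$ imposes on the other agents; charging this externality is what makes truthful reporting a dominant strategy in such mechanisms.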