Peer prediction mechanisms incentivize high-quality feedback with provable guarantees. However, current methods apply only to rather simple reports, such as multiple-choice selections or scalar numbers. We aim to broaden these techniques to the much larger domain of text-based reports, drawing on recent developments in large language models. This vastly expands the applicability of peer prediction, as textual feedback is the norm across a wide variety of feedback channels: peer reviews, e-commerce customer reviews, and comments on social media. We introduce two mechanisms, the Generative Peer Prediction Mechanism (GPPM) and the Generative Synopsis Peer Prediction Mechanism (GSPPM). Both use an LLM as a predictor, mapping one agent's report to a prediction of her peer's report. Theoretically, we show that when the LLM prediction is sufficiently accurate, our mechanisms can incentivize high effort and truth-telling as an (approximate) Bayesian Nash equilibrium. Empirically, we confirm the efficacy of our mechanisms through experiments on two real-world datasets: the Yelp review dataset and the ICLR OpenReview dataset. Notably, on the ICLR dataset our mechanisms distinguish three quality levels in expected score: human-written reviews, GPT-4-generated reviews, and GPT-3.5-generated reviews. Moreover, GSPPM penalizes LLM-generated reviews more effectively than GPPM does.
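To make the predictor-based payment concrete, the following is a minimal sketch of a log-scoring peer prediction payment. It is illustrative only: the `token_distribution` unigram model is a hypothetical stand-in for the LLM predictor that the actual mechanisms use, and the function names are ours, not the paper's.

```python
# Hedged sketch: pay each agent the (average) log-probability that a
# predictor, conditioned on her own report, assigns to her peer's report.
# In GPPM/GSPPM the predictor is an LLM; here a unigram model stands in.
import math
from collections import Counter

def token_distribution(report: str) -> dict:
    """Stand-in predictor: unigram distribution over the report's tokens.
    (In the actual mechanisms, an LLM plays this role.)"""
    counts = Counter(report.lower().split())
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()}

def log_score(agent_report: str, peer_report: str, floor: float = 1e-6) -> float:
    """Average log-probability the agent's predictor assigns to the peer's
    report; reports that correlate with the peer's earn a higher payment."""
    pred = token_distribution(agent_report)
    peer_tokens = peer_report.lower().split()
    return sum(math.log(pred.get(t, floor)) for t in peer_tokens) / len(peer_tokens)

# A report that genuinely engages with the item predicts its peer better
# than an uninformative one, so it earns a higher score.
informative = log_score("the proofs are clear but experiments are limited",
                        "experiments are limited though the proofs are clear")
uninformative = log_score("nice paper good work",
                          "experiments are limited though the proofs are clear")
assert informative > uninformative
```

The log-scoring rule is what ties payment to prediction accuracy: an agent maximizes her expected score by submitting the report that best predicts her peer's, which is the source of the truth-telling incentive.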