In the post-Turing era, evaluating large language models (LLMs) involves assessing generated text based on readers' reactions rather than merely its indistinguishability from human-produced content. This paper explores how LLM-generated text impacts readers' decisions, focusing on both amateur and expert audiences. Our findings indicate that GPT-4 can generate persuasive analyses that affect the decisions of both amateurs and professionals. Furthermore, we evaluate the generated text along the dimensions of grammar, convincingness, logical coherence, and usefulness. The results reveal a high correlation between real-world evaluation through audience reactions and the multi-dimensional evaluators currently in common use for generative models. Overall, this paper demonstrates both the potential and the risks of using generated text to sway human decisions, and it points to a new direction for evaluating generated text: leveraging the reactions and decisions of readers. We release our dataset to assist future research.