Large language models (LLMs) play a key role in generating evidence-based and stylistic counter-arguments, yet their effectiveness in real-world applications remains underexplored. Previous research often neglects the balance between evidentiality and style, both of which are crucial for persuasive arguments. To address this, we evaluated the effectiveness of stylized evidence-based counter-argument generation in Counterfire, a new dataset of 38,000 counter-arguments generated by revising counter-arguments to Reddit's ChangeMyView community to follow different discursive styles. We evaluated generic and stylized counter-arguments from base and fine-tuned models such as GPT-3.5, PaLM-2, and Koala-13B, as well as newer models (GPT-4o, Claude Haiku, LLaMA-3.1), focusing on rhetorical quality and persuasiveness. Our findings reveal that humans prefer stylized counter-arguments over the original outputs, with GPT-3.5 Turbo performing well, though still falling short of human standards of both rhetorical quality and persuasiveness, indicating a persistent style-evidence tradeoff in counter-argument generation by LLMs. We conclude with an examination of ethical considerations in LLM persuasion research, addressing the potential risks of deceptive practices and the need for transparent deployment methodologies to safeguard against misuse in public discourse. The code and dataset are available at https://github.com/Preetika764/Style_control/.