Large language models (LLMs) play a key role in generating evidence-based and stylized counter-arguments, yet their effectiveness in real-world applications remains underexplored. Previous research often neglects the balance between evidentiality and style, both of which are crucial for persuasive arguments. To address this, we evaluated stylized evidence-based counter-argument generation on Counterfire, a new dataset of 38,000 counter-arguments generated by revising counter-arguments posted to Reddit's ChangeMyView community to follow different discursive styles. We evaluated generic and stylized counter-arguments from base and fine-tuned models such as GPT-3.5, PaLM-2, and Koala-13B, as well as newer models (GPT-4o, Claude Haiku, LLaMA-3.1), focusing on rhetorical quality and persuasiveness. Our findings reveal that humans prefer stylized counter-arguments over the original outputs, with GPT-3.5 Turbo performing strongly, though still falling short of human standards of rhetorical quality and persuasiveness. In addition, we constructed a novel dataset of argument triplets with human preference labels for studying style control, offering insights into the trade-offs between evidence integration and argument quality.