Logical fallacies are common in public communication and can mislead audiences; fallacious arguments may still appear convincing despite lacking soundness, because convincingness is inherently subjective. We present the first computational study of how emotional framing interacts with fallacies and convincingness, using large language models (LLMs) to systematically vary emotional appeals in fallacious arguments. We benchmark eight LLMs on injecting emotional appeal into fallacious arguments while preserving their logical structure, then use the best-performing models to generate stimuli for a human study. Our results show that LLM-driven emotional framing reduces human fallacy detection by 14.5% F1 on average. Humans detect fallacies better when perceiving enjoyment than when perceiving fear or sadness, and these three emotions also correlate with significantly higher convincingness than neutral or other emotional states. Our work has implications for AI-driven emotional manipulation in the context of fallacious argumentation.