Detecting persuasion in argumentative text is a challenging task with important implications for understanding human communication. This work investigates the role of persuasion strategies (such as Attack on reputation, Distraction, and Manipulative wording) in determining the persuasiveness of a text. We conduct experiments on three annotated argument datasets: Winning Arguments (built from the Change My View subreddit), Anthropic/Persuasion, and Persuasion for Good. Our method leverages large language models (LLMs) through Multi-Strategy Persuasion Scoring, which guides reasoning over six persuasion strategies. Results show that strategy-guided reasoning improves the prediction of persuasiveness. To better understand the influence of content, we organize the Winning Arguments dataset into broad discussion topics and analyze performance across them. We publicly release this topic-annotated version of the dataset to facilitate future research. Overall, our methodology demonstrates the value of structured, strategy-aware prompting for enhancing interpretability and robustness in argument quality assessment.
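One way to picture the Multi-Strategy Persuasion Scoring setup is a prompt that asks an LLM to rate an argument against each strategy and then aggregates the ratings. The sketch below is illustrative only: it uses the three strategies named in the abstract (the other three are not listed there), assumes a 0-1 rating scale and mean aggregation, and stubs out the LLM call; the actual method may prompt and aggregate differently.

```python
from typing import Callable, Dict, List

# Three of the six strategies named in the abstract; the remaining three
# are not listed there, so this sketch covers the named subset only.
STRATEGIES: List[str] = [
    "Attack on reputation",
    "Distraction",
    "Manipulative wording",
]


def build_prompt(argument: str, strategy: str) -> str:
    """Compose a strategy-guided rating prompt (hypothetical template)."""
    return (
        "On a 0-1 scale, rate how strongly the argument below uses the "
        f"persuasion strategy '{strategy}', and justify the rating.\n\n"
        f"Argument:\n{argument}"
    )


def score_persuasiveness(
    argument: str,
    llm_rate: Callable[[str], float],
    strategies: List[str] = STRATEGIES,
) -> float:
    """Average per-strategy LLM ratings into one persuasiveness score
    (mean aggregation is an assumption, not the paper's stated rule)."""
    ratings: Dict[str, float] = {
        s: llm_rate(build_prompt(argument, s)) for s in strategies
    }
    return sum(ratings.values()) / len(ratings)


# Stub rater standing in for a real LLM call, for illustration only.
def stub_rate(prompt: str) -> float:
    return 0.5


print(round(score_persuasiveness("Example argument.", stub_rate), 2))  # prints 0.5
```

The per-strategy ratings also double as an interpretability signal: inspecting which strategies received high scores explains why a text was judged persuasive.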