Large Language Models (LLMs) have made significant progress in language understanding and reasoning, making the evaluation and analysis of their logical reasoning abilities essential. However, existing datasets and benchmarks are often limited to overly simplistic, unnatural, or contextually constrained examples. In response to this growing demand, we introduce SmartyPat-Bench, a challenging, naturally expressed, and systematically labeled benchmark derived from high-quality real-world Reddit posts containing subtle logical fallacies. Unlike existing datasets and benchmarks, it provides more detailed annotations of logical fallacies and features more diverse data. To further scale up the study and address the limitations of manual data collection and labeling (such as fallacy-type imbalance and labor-intensive annotation), we introduce SmartyPat, an automated framework powered by logic programming-based oracles. SmartyPat uses Prolog rules to systematically generate logically fallacious statements, which are then refined into fluent natural-language sentences by LLMs, ensuring precise representation of the intended fallacy. Extensive evaluation demonstrates that SmartyPat produces fallacies comparable in subtlety and quality to human-generated content and significantly outperforms baseline methods. Finally, experiments reveal nuanced insights into LLM capabilities, highlighting that while excessive reasoning steps hinder fallacy detection accuracy, structured reasoning enhances fallacy categorization performance.
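To give a concrete sense of what a logic programming-based oracle can look like, the following is a minimal, hypothetical sketch (not taken from SmartyPat) of a Prolog rule that over-generates "post hoc ergo propter hoc" seeds; the predicate names and facts are illustrative assumptions only.

```prolog
% Hypothetical knowledge base: events annotated with a time index.
event(rooster_crows, 5).
event(sun_rises, 6).

% A deliberately fallacious rule: claim causation merely because one
% event precedes another. Every satisfying pair is a "post hoc" seed.
post_hoc_fallacy(Cause, Effect) :-
    event(Cause, T1),
    event(Effect, T2),
    T1 < T2,
    Cause \= Effect.

% Querying ?- post_hoc_fallacy(C, E). yields pairs such as
% C = rooster_crows, E = sun_rises, which an LLM could then verbalize
% into a fluent sentence like "The rooster crowed and then the sun
% rose, so the crowing must have caused the sunrise."
```

In this style of pipeline, the Prolog layer guarantees that every generated seed instantiates the targeted fallacy structure, while the LLM is responsible only for surface fluency, not for the underlying logic.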