In previous work (Coppola, 2024) we introduced the Quantified Boolean Bayesian Network (QBBN), a logical graphical model that implements the forward fragment of natural deduction (Prawitz, 1965) as a probabilistic factor graph. That work left two gaps: no negation or backward reasoning, and no parser for natural language. This paper addresses both gaps across inference, semantics, and syntax. For inference, we extend the QBBN with NEG factors enforcing P(x) + P(¬x) = 1, enabling contrapositive reasoning (modus tollens) via backward lambda messages and completing Prawitz's simple elimination rules. The engine passes 44/44 test cases spanning 22 reasoning patterns. For semantics, we present a typed logical language with role-labeled predicates, modal quantifiers, and three tiers of expressiveness following Prawitz: first-order quantification, propositions as arguments, and predicate quantification via lambda abstraction. For syntax, we present a typed slot grammar that deterministically compiles sentences to logical form (33/33 correct, zero ambiguity). LLMs handle disambiguation well (95% PP-attachment accuracy) but cannot produce structured parses directly (12.4% UAS), confirming that a grammar is necessary. The resulting architecture: the LLM preprocesses, the grammar parses, the LLM reranks, and the QBBN infers. We argue this reconciles formal semantics with Sutton's "bitter lesson" (2019): LLMs eliminate the annotation bottleneck that killed formal NLP, serving as annotator while the QBBN serves as verifier. Code: https://github.com/gregorycoppola/world
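As a minimal illustration of the inference claim (a sketch under our own assumptions, not the QBBN implementation; the function name is ours), the NEG constraint P(x) + P(¬x) = 1 lets a backward Bayes update over an implication x → y turn evidence against y into evidence against x, i.e. modus tollens:

```python
def posterior_x_given_not_y(p_x, p_y_given_x, p_y_given_not_x):
    """Backward update: belief in x after observing not-y.

    The NEG constraint supplies P(not y | .) = 1 - P(y | .),
    and Bayes' rule propagates the observation backward.
    """
    p_not_y_given_x = 1.0 - p_y_given_x          # NEG factor on y, given x
    p_not_y_given_not_x = 1.0 - p_y_given_not_x  # NEG factor on y, given not-x
    num = p_not_y_given_x * p_x
    den = num + p_not_y_given_not_x * (1.0 - p_x)
    return num / den

# Hard implication x -> y means P(y | x) = 1, so observing not-y
# drives belief in x to zero (classical contrapositive).
print(posterior_x_given_not_y(0.5, 1.0, 0.5))  # -> 0.0

# A soft rule (P(y | x) = 0.9) only lowers belief in x.
print(posterior_x_given_not_y(0.5, 0.9, 0.5))
```

The soft-rule case shows why this belongs in a probabilistic factor graph rather than a purely classical prover: the contrapositive becomes a graded update instead of an all-or-nothing inference.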