We propose Modal Logical Neural Networks (MLNNs), a neurosymbolic framework that integrates deep learning with the formal semantics of modal logic, enabling reasoning about necessity and possibility. Drawing on Kripke semantics, we introduce specialized neurons for the modal operators $\Box$ and $\Diamond$ that operate over a set of possible worlds, allowing the framework to act as a differentiable ``logical guardrail.'' The architecture is flexible: the accessibility relation between worlds can either be fixed by the user to enforce known rules or, as an inductive feature, be parameterized by a neural network. The model can therefore learn the relational structure of a logical system from data while simultaneously performing deductive reasoning within that structure. The entire framework is differentiable end to end, with learning driven by minimizing a logical contradiction loss. This not only makes the system resilient to inconsistent knowledge but also enables it to learn nonlinear relationships that help define the logic of a problem space. We illustrate MLNNs on four case studies: grammatical guardrailing, multi-agent epistemic trust, detecting constructive deception in natural-language negotiation, and combinatorial constraint satisfaction in Sudoku. These experiments demonstrate how enforcing or learning accessibility can increase logical consistency and interpretability without changing the underlying task architecture.
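To make the Kripke-style operators concrete, the following is a minimal sketch of how $\Box$ and $\Diamond$ neurons might be realized as differentiable soft-min/soft-max aggregations of truth values over accessible worlds. The function names, the temperature parameter `tau`, and the masking scheme are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
import numpy as np

def softmin(x, tau=0.1):
    # Differentiable approximation of min via a negative log-sum-exp.
    return -tau * np.log(np.sum(np.exp(-x / tau)))

def softmax_val(x, tau=0.1):
    # Differentiable approximation of max via a log-sum-exp.
    return tau * np.log(np.sum(np.exp(x / tau)))

def box(truth, access_row, tau=0.1):
    # Box p at world w: p must hold in every world accessible from w.
    # Inaccessible worlds (access_row = 0) are masked to 1.0 so they
    # are vacuously satisfied and do not affect the minimum.
    masked = access_row * truth + (1.0 - access_row)
    return softmin(masked, tau)

def diamond(truth, access_row, tau=0.1):
    # Diamond p at world w: p must hold in at least one accessible world.
    # Inaccessible worlds are masked to 0.0 (false) before the maximum.
    masked = access_row * truth
    return softmax_val(masked, tau)

# Three worlds; world 0 can access worlds 0 and 1 but not world 2.
truth = np.array([1.0, 0.0, 1.0])      # truth value of p at each world
access_row = np.array([1.0, 1.0, 0.0]) # accessibility from world 0

box_p = box(truth, access_row)         # ~0: p fails in accessible world 1
dia_p = diamond(truth, access_row)     # ~1: p holds in accessible world 0
```

In a full model, `access_row` could itself be the (sigmoid) output of a neural network, making the accessibility relation learnable by gradient descent exactly because the aggregations above are smooth.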