We propose Fluid Logic, a paradigm in which modal logical reasoning (temporal, epistemic, doxastic, deontic) is lifted from discrete Kripke structures to continuous manifolds via Neural Stochastic Differential Equations (Neural SDEs). Each type of modal operator is backed by a dedicated Neural SDE, and nested formulas compose these SDEs in a single differentiable graph. A key instantiation is Logic-Informed Neural Networks (LINNs): analogous to Physics-Informed Neural Networks (PINNs), LINNs embed modal logical formulas such as ($\Box$ bounded) and ($\Diamond$ visits\_lobe) directly into the training loss, guiding neural networks toward solutions that are structurally consistent with prescribed logical properties, without requiring knowledge of the governing equations. The resulting framework, Continuous Modal Logical Neural Networks (CMLNNs), yields several key properties: (i) stochastic diffusion prevents quantifier collapse ($\Box$ and $\Diamond$ remain distinct), unlike deterministic ODEs; (ii) modal operators are realized as entropic risk measures, sound with respect to a risk-based semantics, with explicit Monte Carlo concentration guarantees; (iii) SDE-induced accessibility provides structural correspondence with classical modal axioms; (iv) parameterizing accessibility through dynamics reduces memory from quadratic in the number of worlds to linear in the number of parameters. Three case studies demonstrate that Fluid Logic and LINNs can guide neural networks to produce consistent solutions across diverse domains: epistemic/doxastic logic (multi-robot hallucination detection), temporal logic (recovering the Lorenz attractor geometry from logical constraints alone), and deontic logic (learning safe confinement dynamics from a logical specification).
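To make the LINN idea concrete, the following is a minimal sketch (not the paper's implementation) of how a modal formula such as ($\Box$ bounded) could enter a training loss: trajectories are sampled from a toy SDE by Euler-Maruyama, a per-path satisfaction margin is computed for the predicate, and $\Box$/$\Diamond$ are softened into entropic risk measures over the sampled paths, as property (ii) describes. All function names, the drift, the noise level, and the predicate $|x| \le 2$ are illustrative assumptions.

```python
import numpy as np

def euler_maruyama(drift, sigma, x0, dt, steps, n_paths, rng):
    """Sample Monte Carlo paths of dx = drift(x) dt + sigma dW."""
    x = np.full(n_paths, x0, dtype=float)
    traj = [x.copy()]
    for _ in range(steps):
        x = x + drift(x) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
        traj.append(x.copy())
    return np.stack(traj)  # shape: (steps + 1, n_paths)

def entropic_box(margins, theta=5.0):
    """Soft 'necessarily': entropic risk; tends to min(margins) as theta grows."""
    return -np.log(np.mean(np.exp(-theta * margins))) / theta

def entropic_diamond(margins, theta=5.0):
    """Soft 'possibly': dual entropic measure; tends to max(margins) as theta grows."""
    return np.log(np.mean(np.exp(theta * margins))) / theta

rng = np.random.default_rng(0)
# Toy mean-reverting dynamics standing in for a learned Neural SDE drift.
paths = euler_maruyama(lambda x: -x, sigma=0.5, x0=1.0,
                       dt=0.01, steps=200, n_paths=256, rng=rng)

# Predicate "bounded" as a margin: how far each path stays inside |x| <= 2,
# taking the worst time step per path (a temporal "always" along the path).
margin = (2.0 - np.abs(paths)).min(axis=0)

box_val = entropic_box(margin)        # soft (box bounded) across paths
dia_val = entropic_diamond(margin)    # soft (diamond bounded) across paths

# PINN-style logical penalty: positive only when (box bounded) is violated.
logic_loss = max(0.0, -box_val)
```

By Jensen's inequality the entropic $\Box$ is always at most the entropic $\Diamond$, so the two quantifiers cannot collapse onto each other; in a full LINN, `logic_loss` would be a differentiable term backpropagated through the Neural SDE parameters.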