Sequential problems are ubiquitous in AI, such as in reinforcement learning or natural language processing. State-of-the-art deep sequential models, like transformers, excel in these settings but fail to guarantee the satisfaction of constraints necessary for trustworthy deployment. In contrast, neurosymbolic AI (NeSy) provides a sound formalism to enforce constraints in deep probabilistic models, but it scales exponentially on sequential problems. To overcome these limitations, we introduce relational neurosymbolic Markov models (NeSy-MMs), a new class of end-to-end differentiable sequential models that integrate and provably satisfy relational logical constraints. We propose a strategy for inference and learning that scales to sequential settings and combines approximate Bayesian inference, automated reasoning, and gradient estimation. Our experiments show that NeSy-MMs can solve problems beyond the current state of the art in neurosymbolic AI while still providing strong guarantees with respect to desired properties. Moreover, we show that our models are more interpretable and that their constraints can be adapted at test time to out-of-distribution scenarios.
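To make the idea concrete, below is a minimal illustrative sketch (not the authors' implementation) of a single constrained Markov step in PyTorch: a neural network proposes next-state logits, a hard logical constraint mask assigns forbidden states zero probability so that every sampled trajectory satisfies the constraint by construction, and gradients flow through the discrete sample via the straight-through Gumbel-softmax estimator, one of several gradient estimators compatible with the setup described above. All names here (`TransitionNet`, `constrained_step`, `N_STATES`) are hypothetical.

```python
# Illustrative sketch only: a discrete neurosymbolic Markov step in which a
# hard logical constraint is enforced exactly, not merely penalized.
import torch
import torch.nn as nn
import torch.nn.functional as F

N_STATES = 4  # hypothetical size of the discrete state space


class TransitionNet(nn.Module):
    """Neural transition model: maps a one-hot state to next-state logits."""

    def __init__(self, n_states: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_states, 32), nn.ReLU(), nn.Linear(32, n_states)
        )

    def forward(self, state_onehot: torch.Tensor) -> torch.Tensor:
        return self.net(state_onehot)


def constrained_step(net: TransitionNet,
                     state: torch.Tensor,
                     mask: torch.Tensor,
                     tau: float = 1.0) -> torch.Tensor:
    """One Markov step whose output provably satisfies the constraint mask.

    `mask[j] = 1` iff transitioning to state j is logically allowed from the
    current state. Forbidden states receive -inf logits and hence exactly
    zero probability; the straight-through Gumbel-softmax relaxation keeps
    the discrete sample differentiable.
    """
    logits = net(state)
    logits = logits.masked_fill(mask == 0, float("-inf"))
    return F.gumbel_softmax(logits, tau=tau, hard=True)


if __name__ == "__main__":
    torch.manual_seed(0)
    net = TransitionNet(N_STATES)
    state = F.one_hot(torch.tensor(0), N_STATES).float()
    # Hypothetical relational constraint: from state 0, only 1 and 2 are legal.
    mask = torch.tensor([0.0, 1.0, 1.0, 0.0])
    next_state = constrained_step(net, state, mask)
    assert next_state[mask == 0].sum() == 0  # holds by construction
    print(next_state)
```

Masking logits to negative infinity before normalization guarantees zero probability mass on constraint-violating states, which is the sense in which satisfaction can be provable rather than merely encouraged by a soft penalty term.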