Integrating logical knowledge into deep neural network training remains a challenging problem, especially in sequential or temporally extended domains involving subsymbolic observations. To address this problem, we propose DeepDFA, a neurosymbolic framework that integrates high-level temporal logic, expressed as Deterministic Finite Automata (DFAs) or Moore Machines, into neural architectures. DeepDFA models temporal rules as continuous, differentiable layers, enabling the injection of symbolic knowledge into subsymbolic domains. We demonstrate how DeepDFA can be used in two key settings: (i) static image-sequence classification, and (ii) policy learning in interactive non-Markovian environments. Across extensive experiments, DeepDFA outperforms traditional deep learning models (e.g., LSTMs, GRUs, Transformers) and recent neurosymbolic systems, achieving state-of-the-art results in temporal knowledge integration. These results highlight the potential of DeepDFA to bridge subsymbolic learning and symbolic reasoning in sequential tasks.