The unification of neural and symbolic approaches to artificial intelligence remains a central open challenge. In this work, we introduce a tensor network formalism that captures, within tensor decompositions, the sparsity principles originating in the different approaches. In particular, we describe a basis encoding scheme for functions and model neural architectures as tensor decompositions. The proposed formalism can represent logical formulas and probability distributions as structured tensor decompositions. This unified treatment identifies tensor network contraction as a fundamental class of inference and casts efficiently scaling reasoning algorithms, originating from probability theory and propositional logic, as contraction-based message passing schemes. The framework enables the definition and training of hybrid logical and probabilistic models, which we call Hybrid Logic Networks. The theoretical concepts are accompanied by the Python library tnreason, which implements the proposed architectures and supports their practical use.
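To make the basis encoding idea concrete, the following is a minimal sketch (using plain NumPy, not the tnreason API) of how a propositional formula can be represented as a tensor and evaluated by a tensor network contraction: each Boolean value is encoded as a one-hot basis vector, the formula becomes its truth-table tensor, and evaluation is an einsum contraction. The function and variable names here are illustrative, not taken from the library.

```python
import numpy as np

def basis(b):
    """Basis encoding: the Boolean value b becomes the one-hot vector e_b in R^2."""
    v = np.zeros(2)
    v[int(b)] = 1.0
    return v

# Truth-table tensor of the formula (x AND y): T[x, y] = 1 iff x and y are both true.
T_and = np.zeros((2, 2))
for x in (0, 1):
    for y in (0, 1):
        T_and[x, y] = float(bool(x) and bool(y))

# Evaluating the formula on an assignment is a tensor network contraction:
# contract the formula tensor with the basis vectors of the inputs.
def evaluate(x, y):
    return np.einsum("xy,x,y->", T_and, basis(x), basis(y))
```

Contracting `T_and` with `basis(True)` and `basis(False)` yields 0.0, matching the truth table; summing over free indices instead of contracting with basis vectors would count satisfying assignments, which hints at how probabilistic inference fits the same contraction scheme.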