Progress in AI is hindered by the lack of a programming language with all the requisite features. Libraries like PyTorch and TensorFlow provide automatic differentiation and efficient GPU implementation, but are additions to Python, which was never intended for AI. Their lack of support for automated reasoning and knowledge acquisition has led to a long and costly series of hacky attempts to tack them on. On the other hand, AI languages like LISP and Prolog lack scalability and support for learning. This paper proposes tensor logic, a language that solves these problems by unifying neural and symbolic AI at a fundamental level. The sole construct in tensor logic is the tensor equation, based on the observation that logical rules and Einstein summation are essentially the same operation, and all else can be reduced to them. I show how to elegantly implement key forms of neural, symbolic and statistical AI in tensor logic, including transformers, formal reasoning, kernel machines and graphical models. Most importantly, tensor logic makes new directions possible, such as sound reasoning in embedding space. This combines the scalability and learnability of neural networks with the reliability and transparency of symbolic reasoning, and is potentially a basis for the wider adoption of AI.
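The claimed correspondence between logical rules and Einstein summation can be illustrated with a toy example (a minimal sketch, not from the paper: the relation names and the small domain are assumptions for illustration). A Datalog-style rule such as Grandparent(x, z) ← Parent(x, y), Parent(y, z) joins two relations on the shared variable y and projects it away, which is exactly what einsum does when it sums out a shared index over Boolean tensors:

```python
import numpy as np

# Toy domain of 4 individuals; Parent as a Boolean adjacency tensor.
n = 4
parent = np.zeros((n, n), dtype=int)
parent[0, 1] = 1  # 0 is a parent of 1
parent[1, 2] = 1  # 1 is a parent of 2
parent[2, 3] = 1  # 2 is a parent of 3

# Einstein summation "xy,yz->xz" sums out the shared variable y,
# mirroring the join-and-project of the rule
#   Grandparent(x, z) <- Parent(x, y), Parent(y, z).
counts = np.einsum('xy,yz->xz', parent, parent)

# Thresholding the counts recovers the Boolean relation
# (existential quantification over y).
grandparent = (counts > 0).astype(int)

print(grandparent[0, 2], grandparent[1, 3], grandparent[0, 3])  # 1 1 0
```

The same pattern generalizes: keeping the summed values instead of thresholding yields the soft, differentiable version that neural implementations can learn over.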