Expressive querying of machine learning models, viewed as a form of intensional data, enables their verification and interpretation using declarative languages, thereby making learned representations of data more accessible. Motivated by the querying of feedforward neural networks, we investigate logics for weighted structures. In the absence of a bound on neural network depth, such logics must incorporate recursion; to that end, we revisit the functional fixpoint mechanism proposed by Grädel and Gurevich. We adopt it in a Datalog-like syntax, extend normal forms for fixpoint logics to weighted structures, and show an equivalent "loose" fixpoint mechanism that allows values of inductively defined weight functions to be overwritten. We propose a "scalar" restriction of functional fixpoint logic, with polynomial-time data complexity, and show that it can express all PTIME model-agnostic queries over reduced networks with polynomially bounded weights. In contrast, we show that even very simple model-agnostic queries are already NP-complete. Finally, we consider transformations of weighted structures by iterated transductions.