Reconciling the tension between inductive learning and deductive reasoning in first-order relational domains is a longstanding challenge in AI. We study the problem of answering queries in a first-order relational probabilistic logic jointly through learning and reasoning, without ever constructing an explicit model. Traditional lifted inference assumes access to a complete model and exploits symmetry to evaluate probabilistic queries; however, learning such models from partial, noisy observations is intractable in general. We reconcile these two challenges by combining implicit learning to reason with first-order relational probabilistic inference. More specifically, we merge incomplete first-order axioms with independently sampled, partially observed examples into a bounded-degree fragment of the sum-of-squares (SOS) hierarchy in polynomial time. Our algorithm performs two lifts simultaneously: (i) a grounding-lift, in which renaming-equivalent ground moments share a single variable, collapsing the domain of individuals; and (ii) a world-lift, in which all pseudo-models (partial world assignments) are enforced in parallel, producing a global bound that holds across all worlds consistent with the learned constraints. Together, these innovations yield the first polynomial-time framework that implicitly learns a first-order probabilistic logic and performs lifted inference over both individuals and worlds.
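To make the grounding-lift concrete, the sketch below shows one way renaming-equivalent ground moments can be collapsed to a shared variable: each monomial of ground atoms is mapped to a canonical key by renaming constants in order of first appearance, so that any two moments related by a renaming of individuals collide on the same key. This is a hypothetical illustration of the idea, not the paper's implementation; it assumes monomials are presented with atoms in a fixed order (a full orbit canonicalization would also quotient over atom reorderings).

```python
from itertools import count

def canonical_key(monomial):
    """Canonicalize a ground monomial (a tuple of (predicate, args) atoms)
    by renaming constants to v0, v1, ... in order of first appearance.
    Renaming-equivalent monomials map to the same key, so their moment
    variables can be shared in the SOS relaxation (grounding-lift sketch)."""
    rename = {}            # constant -> canonical variable name
    fresh = count()
    canon = []
    for pred, args in monomial:
        new_args = tuple(
            rename.setdefault(a, f"v{next(fresh)}") for a in args
        )
        canon.append((pred, new_args))
    return tuple(canon)

# Two moments that differ only by a renaming of individuals
# collapse to one shared variable slot:
m1 = (("Knows", ("alice", "bob")), ("Knows", ("bob", "carol")))
m2 = (("Knows", ("dave", "erin")), ("Knows", ("erin", "frank")))
assert canonical_key(m1) == canonical_key(m2)
```

Under this keying, the SOS moment matrix is indexed by canonical keys rather than raw ground monomials, so its size scales with the number of renaming orbits rather than with the number of individuals.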