Neural probabilistic logic systems follow the neuro-symbolic (NeSy) paradigm by combining the perceptive and learning capabilities of neural networks with the robustness of probabilistic logic. Learning corresponds to likelihood optimization of the neural networks. However, computing the likelihood exactly requires expensive probabilistic logic inference. To scale learning to more complex systems, we therefore propose to instead optimize a sampling-based objective. We prove that the objective has a bounded error with respect to the likelihood, and that this error vanishes as the sample count increases. Furthermore, the error vanishes faster by exploiting a new concept of sample diversity. We then develop the EXPLAIN, AGREE, LEARN (EXAL) method that uses this objective. EXPLAIN samples explanations for the data. AGREE reweighs each explanation in concordance with the neural component. LEARN uses the reweighed explanations as a signal for learning. In contrast to previous NeSy methods, EXAL can scale to larger problem sizes while retaining theoretical guarantees on the error. Experimentally, our theoretical claims are verified, and EXAL outperforms recent NeSy methods when scaling up the MNIST addition and Warcraft pathfinding problems.
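The three steps can be illustrated on the MNIST addition task the abstract mentions: two digit images are classified by a neural network, and only their sum is observed. The sketch below is a toy illustration under simplifying assumptions, not the paper's implementation; the function names (`explain`, `agree`, `learn_targets`) and the stand-in softmax outputs are hypothetical, and explanations here are simply digit pairs consistent with the observed sum.

```python
import numpy as np

rng = np.random.default_rng(0)

def explain(label, num_samples):
    """EXPLAIN: sample explanations of the observed sum, i.e.
    digit pairs (d1, d2) with d1 + d2 == label and both in 0..9."""
    lo, hi = max(0, label - 9), min(label, 9)
    d1 = rng.integers(lo, hi + 1, size=num_samples)
    return [(int(a), int(label - a)) for a in d1]

def agree(samples, p1, p2):
    """AGREE: reweigh each explanation by how strongly the neural
    component agrees with it (product of its digit probabilities)."""
    w = np.array([p1[a] * p2[b] for a, b in samples], dtype=float)
    return w / w.sum()

def learn_targets(samples, weights):
    """LEARN: aggregate reweighed explanations into per-image target
    distributions over digits, usable as a supervision signal."""
    t1, t2 = np.zeros(10), np.zeros(10)
    for (a, b), w in zip(samples, weights):
        t1[a] += w
        t2[b] += w
    return t1, t2

# Toy stand-in for the neural component: softmax outputs over digits 0-9.
p1 = np.full(10, 0.02); p1[3] = 0.82   # network leans towards "3"
p2 = np.full(10, 0.02); p2[4] = 0.82   # network leans towards "4"

samples = explain(7, 1000)             # explanations for the observed sum 7
weights = agree(samples, p1, p2)
t1, t2 = learn_targets(samples, weights)
```

In this sketch the explanation (3, 4) receives most of the weight because the neural component already favors it, so the resulting targets reinforce that reading of the two images; in the actual method the sampling and reweighing carry the theoretical error guarantees stated above.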