Many inductive logic programming (ILP) methods cannot learn programs from probabilistic background knowledge, such as knowledge derived from sensory data or from neural networks that output probabilities. We propose Propper, which handles flawed and probabilistic background knowledge by extending ILP with a combination of neurosymbolic inference, a continuous criterion for hypothesis selection (binary cross-entropy, BCE) and a relaxation of the hypothesis constrainer (NoisyCombo). For relational patterns in noisy images, Propper can learn programs from as few as 8 examples, outperforming binary ILP and statistical models such as a graph neural network.
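To make the continuous selection criterion concrete, here is a minimal sketch of scoring a candidate hypothesis by binary cross-entropy against probabilistic example labels. This is not Propper's actual implementation; the function name and interface are assumptions for illustration only.

```python
import math

def bce_score(predictions, labels):
    """Mean binary cross-entropy between a hypothesis's probabilistic
    coverage of the examples and their true labels (lower is better).

    predictions: probability the hypothesis assigns to each example
    labels: 1 for positive examples, 0 for negative examples
    """
    eps = 1e-7  # clamp probabilities to avoid log(0)
    total = 0.0
    for p, y in zip(predictions, labels):
        p = min(max(p, eps), 1.0 - eps)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(predictions)

# A hypothesis that covers positives with high probability and
# negatives with low probability scores better (lower BCE):
good = bce_score([0.9, 0.8, 0.1], [1, 1, 0])
bad = bce_score([0.4, 0.5, 0.6], [1, 1, 0])
```

Unlike a binary coverage count, this score varies continuously with the probabilities, so hypotheses can be ranked even when the background knowledge is noisy.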