Synthesizing large logic programs through symbolic Inductive Logic Programming (ILP) typically requires intermediate definitions. However, cluttering the hypothesis space with intensional predicates usually degrades performance. In contrast, gradient descent provides an efficient way to find solutions in such high-dimensional spaces. Neuro-symbolic ILP approaches have not fully exploited this so far. We propose extending the {\delta}ILP approach to inductive synthesis with large-scale predicate invention, allowing us to exploit the efficacy of high-dimensional gradient descent. We show that large-scale predicate invention benefits differentiable inductive synthesis through gradient descent and enables learning solutions for tasks beyond the reach of existing neuro-symbolic ILP systems. Moreover, we achieve these results without specifying the precise structure of the solution in the language bias.