Inductive logic programming (ILP) is a form of logical machine learning. Most ILP algorithms learn a single hypothesis from a single training run, and ensemble methods obtain multiple hypotheses by training an ILP algorithm multiple times. In this paper, we instead train an ILP algorithm only once and save the intermediate hypotheses it generates. We then combine these hypotheses using a minimum description length weighting scheme. Our experiments on multiple benchmarks, including game playing and visual reasoning, show that our approach improves predictive accuracy by 4% while adding less than 1% computational overhead.
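The abstract does not spell out the weighting formula, so the following is a minimal sketch of one plausible minimum-description-length scheme, assuming each hypothesis's cost is its program size plus its training errors (the standard MDL trade-off of encoding the program and its exceptions); `Hypothesis`, `mdl_weight`, and `ensemble_predict` are hypothetical names, not the paper's API.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Hypothesis:
    predict: Callable[[object], bool]  # the hypothesis viewed as a boolean classifier
    size: int                          # description length of the program, e.g. number of literals
    train_errors: int                  # training examples the hypothesis misclassifies

def mdl_weight(h: Hypothesis) -> float:
    # Assumed MDL cost: bits to encode the program plus bits to encode its exceptions.
    # Each saved intermediate hypothesis gets weight 2^-cost.
    return 2.0 ** -(h.size + h.train_errors)

def ensemble_predict(hypotheses: List[Hypothesis], x: object) -> bool:
    # Combine the saved intermediate hypotheses by MDL-weighted majority vote.
    total = sum(mdl_weight(h) for h in hypotheses)
    vote = sum(mdl_weight(h) for h in hypotheses if h.predict(x))
    return vote >= total / 2
```

Under this reading, shorter and more accurate intermediate hypotheses dominate the vote exponentially, while long or error-prone ones contribute negligibly, which is consistent with combining all intermediate hypotheses at almost no extra training cost.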