Abductive Learning (ABL) integrates machine learning with logical reasoning in a loop: a learning model predicts symbolic concept labels from raw inputs, the labels are revised through abduction using domain knowledge, and the revised labels are fed back to retrain the model. However, because abduction is nondeterministic, training is often unstable, especially when the knowledge base is large and complex and therefore induces a prohibitively large abduction space. Prior work focuses on improving candidate selection within this space but typically treats the knowledge base as a static black box. In this work, we propose Curriculum Abductive Learning (C-ABL), a method that explicitly leverages the internal structure of the knowledge base to address these training challenges. C-ABL partitions the knowledge base into a sequence of sub-bases that are introduced progressively during training. This keeps the abduction space small throughout training and lets the model incorporate logical constraints in a smooth, stepwise manner. Experiments across multiple tasks show that C-ABL outperforms previous ABL implementations, significantly improving training stability, convergence speed, and final accuracy, especially under complex knowledge settings.
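To make the described loop concrete, below is a minimal, self-contained Python sketch of a curriculum over knowledge sub-bases. It is an illustration under assumptions, not the paper's implementation: the knowledge base is modeled as predicates over label sequences, the split into `SUB_BASES` is an arbitrary illustrative partition (the paper's partitioning criterion is not specified here), and abduction selects the consistent labeling closest to the model's prediction, a common ABL heuristic assumed for this sketch. All names (`SUB_BASES`, `abduce`, `ToyModel`, `c_abl_train`) are hypothetical.

```python
from itertools import product
import random

# Toy knowledge base split into two sub-bases activated stagewise.
# Illustrative partition only; not the paper's partitioning criterion.
SUB_BASES = [
    [lambda z: sum(z) % 2 == 0],   # stage 1: parity constraint on the labels
    [lambda z: z[0] <= z[-1]],     # stage 2: an additional ordering constraint
]

def abduce(pred, rules, num_classes=2):
    """Return the rule-consistent label sequence closest to the prediction.
    Fewer active rules (early stages) leave a smaller abduction space to search."""
    candidates = [c for c in product(range(num_classes), repeat=len(pred))
                  if all(r(c) for r in rules)]
    return min(candidates,
               key=lambda c: sum(p != q for p, q in zip(pred, c)),
               default=None)

class ToyModel:
    """Stand-in learning model: predicts random labels until it has
    'retrained' by memorising the abduced labels for an input."""
    def __init__(self):
        self.memory = {}
    def predict(self, x):
        return self.memory.get(x, [random.randint(0, 1) for _ in x])
    def fit(self, x, labels):
        self.memory[x] = labels

def c_abl_train(model, data, epochs_per_stage=3):
    active_rules = []
    for sub_base in SUB_BASES:             # curriculum: grow the KB stage by stage
        active_rules += sub_base
        for _ in range(epochs_per_stage):
            for x in data:
                pred = model.predict(x)                 # predict concept labels
                revised = abduce(list(pred), active_rules)  # abductive revision
                if revised is not None:
                    model.fit(x, revised)               # retrain on revised labels
    return model

if __name__ == "__main__":
    trained = c_abl_train(ToyModel(), data=[(0, 1, 2), (3, 4, 5)])
    print(trained.memory)
```

In this sketch, early stages abduce against only the first sub-base, so the candidate set is small and revisions stay close to the model's predictions; later stages add constraints once the model has partially converged, mirroring the stepwise introduction of logic described above.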