Machine learning (ML) techniques play a pivotal role in high-stakes domains such as healthcare, where accurate predictions can greatly enhance decision-making. However, high-performing methods such as neural networks and ensemble models are often opaque, limiting trust and broader adoption. In parallel, symbolic methods like Answer Set Programming (ASP) offer interpretable logical rules but do not always match the predictive power of ML models. This paper proposes a hybrid approach that integrates ASP-derived rules from the FOLD-R++ algorithm with black-box ML classifiers to selectively correct uncertain predictions and provide human-readable explanations. Experiments on five medical datasets reveal statistically significant gains in accuracy and F1 score. This study underscores the potential of combining symbolic reasoning with conventional ML to achieve high interpretability without sacrificing accuracy.
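The selective-correction idea in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the rule function is a hypothetical hand-written stand-in for a FOLD-R++ induced rule, and the confidence threshold and fallback policy are assumptions, since the abstract does not specify how uncertain predictions are identified.

```python
def rule_predict(sample):
    """Hypothetical symbolic rule standing in for FOLD-R++ output,
    e.g. 'positive if glucose > 140 and bmi > 30'. Not from the paper."""
    return 1 if sample["glucose"] > 140 and sample["bmi"] > 30 else 0

def hybrid_predict(samples, probas, threshold=0.7):
    """Keep the black-box label when its confidence meets the threshold;
    otherwise defer to the interpretable rule (assumed policy)."""
    preds = []
    for sample, p in zip(samples, probas):
        confidence = max(p)          # black-box confidence for its top class
        if confidence >= threshold:
            preds.append(p.index(confidence))   # trust the black box
        else:
            preds.append(rule_predict(sample))  # correct via symbolic rule
    return preds

# Toy usage: the first case is uncertain (0.55) and is overridden by the rule;
# the second is confident (0.95) and keeps the black-box label.
samples = [{"glucose": 160, "bmi": 33}, {"glucose": 100, "bmi": 22}]
probas = [[0.55, 0.45], [0.95, 0.05]]
print(hybrid_predict(samples, probas))
```

A practical variant would take `probas` from a trained classifier's `predict_proba` and the rules from FOLD-R++; the threshold then becomes a tunable trade-off between black-box accuracy and rule coverage.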