Traditional neural networks achieve impressive classification performance, but what they learn cannot be inspected, verified, or extracted. Neural Logic Networks, by contrast, have an interpretable structure that lets them learn a logical mechanism relating inputs to outputs through AND and OR operations. We generalize these networks with NOT operations and with biases that account for unobserved data, and we develop a rigorous logical and probabilistic model in terms of concept combinations to motivate their use. We also propose a novel factorized IF-THEN rule structure for the model, together with a modified learning algorithm. Our method improves on the state of the art in Boolean network discovery and learns relevant, interpretable rules in tabular classification, notably on examples from the medical and industrial fields where interpretability has tangible value.