Neural networks (NNs) are pervasive across many domains but often lack interpretability. To address the growing need for explanations, logic-based approaches offering correctness guarantees have been proposed to explain NN predictions. However, the scalability of these methods remains a concern. This paper proposes an approach that leverages domain slicing to speed up explanation generation for NNs. By reducing the complexity of the logical constraints through slicing, we decrease explanation time by up to 40\%, as our comparative experiments indicate. These findings highlight the efficacy of domain slicing in improving the efficiency of NN explanation.
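To give a flavor of why slicing the input domain can simplify logical reasoning about a network, here is a minimal toy sketch (not the paper's algorithm, and the network and bound-propagation scheme are illustrative assumptions): naive interval arithmetic over a piecewise-linear function loses precision because both terms depend on the same input, and partitioning the domain into slices recovers much of that precision, making each per-slice query easier to discharge.

```python
# Toy illustration (assumed, not the paper's method): bounding
# f(x) = relu(x) - relu(x - 0.5) over [0, 1] with interval arithmetic.
# Analyzing the whole domain at once gives loose bounds; slicing the
# domain and taking the union of per-slice bounds tightens them.

def relu_iv(lo, hi):
    """Interval image of ReLU over [lo, hi]."""
    return max(lo, 0.0), max(hi, 0.0)

def f_bounds(lo, hi):
    """Naive interval bounds for f(x) = relu(x) - relu(x - 0.5).
    Loose, because the two terms' shared dependence on x is ignored."""
    a_lo, a_hi = relu_iv(lo, hi)
    b_lo, b_hi = relu_iv(lo - 0.5, hi - 0.5)
    return a_lo - b_hi, a_hi - b_lo

def sliced_f_bounds(lo, hi, n_slices):
    """Union of per-slice bounds; more slices -> tighter overall bounds."""
    step = (hi - lo) / n_slices
    parts = [f_bounds(lo + i * step, lo + (i + 1) * step)
             for i in range(n_slices)]
    return min(p[0] for p in parts), max(p[1] for p in parts)

for n in (1, 2, 4, 16):
    lo, hi = sliced_f_bounds(0.0, 1.0, n)
    print(n, round(hi - lo, 4))  # bound width shrinks as slices increase
```

The true range of f over [0, 1] is [0, 0.5]; with one slice the computed bound has width 1.5, and the width shrinks monotonically toward 0.5 as the number of slices grows, mirroring how smaller slices yield simpler, tighter logical constraints per query.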