Fuzzy systems show strong potential in explainable AI due to their rule-based architecture and linguistic variables. Existing approaches navigate the accuracy-explainability trade-off either through evolutionary multi-objective optimization (MOO), which is computationally expensive, or through gradient-based scalarization, which cannot recover solutions in non-convex regions of the Pareto front. We propose X-ANFIS, an alternating bi-objective gradient-based optimization scheme for explainable adaptive neuro-fuzzy inference systems. Cauchy membership functions provide stable training under semantically controlled initializations, and a differentiable explainability objective is introduced and decoupled from the performance objective through alternating gradient passes. Validated in approximately 5,000 experiments on nine UCI regression datasets, X-ANFIS consistently achieves target distinguishability while maintaining competitive predictive accuracy, recovering solutions beyond the convex hull of the MOO Pareto front.
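To make the two core ingredients concrete, here is a minimal sketch of a Cauchy membership function and an alternating (non-scalarized) bi-objective gradient descent. The loss functions are hypothetical one-dimensional stand-ins chosen for illustration: `perf_loss` mimics a fit-error term pulling a membership center toward the data, and `expl_loss` mimics a distinguishability term pushing centers apart; the actual X-ANFIS objectives are not specified here.

```python
import numpy as np

def cauchy_mf(x, c, a):
    """Cauchy membership function with center c and width a.
    Heavy-tailed and smooth, so gradients stay nonzero far from c."""
    return 1.0 / (1.0 + ((x - c) / a) ** 2)

# Hypothetical stand-ins for the two objectives (illustration only):
def perf_loss(c):                 # performance: pull center toward target 2.0
    return (c - 2.0) ** 2

def perf_grad(c):
    return 2.0 * (c - 2.0)

def expl_loss(c, c_other=0.0, margin=1.5):  # explainability: keep centers apart
    gap = abs(c - c_other)
    return max(0.0, margin - gap) ** 2

def expl_grad(c, c_other=0.0, margin=1.5):
    gap = abs(c - c_other)
    if gap >= margin:
        return 0.0
    return -2.0 * (margin - gap) * np.sign(c - c_other)

def alternating_descent(c, lr=0.1, steps=200):
    """Alternate one gradient pass per objective instead of
    minimizing a single weighted (scalarized) sum."""
    for t in range(steps):
        if t % 2 == 0:
            c -= lr * perf_grad(c)   # performance pass
        else:
            c -= lr * expl_grad(c)   # explainability pass
    return c

c_star = alternating_descent(c=0.1)
```

Because the passes alternate rather than sum, no fixed trade-off weight is baked in, which is what allows the method to reach points a convex scalarization cannot.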