Understanding output variance is critical in modeling nonlinear dynamic systems, as it reflects the system's sensitivity to input variations and feature interactions. This work presents a methodology for dynamically determining relevance scores in black-box models while ensuring interpretability through an embedded decision module. This interpretable module, integrated into the first layer of the model, employs the Fisher Information Matrix (FIM) and logistic regression to compute relevance scores, interpreted as the probabilities of input neurons being active based on their contribution to the output variance. The proposed method leverages a gradient-based framework to uncover the importance of variance-driven features, capturing both individual contributions and complex feature interactions. The relevance scores are applied through element-wise transformations of the inputs, enabling the black-box model to prioritize features dynamically according to their impact on the system output. This approach effectively bridges interpretability with the modeling of intricate nonlinear dynamics and time-dependent interactions. Simulation results demonstrate the method's ability to infer feature interactions while achieving superior feature-relevance identification compared to existing techniques. The practical utility of the approach is showcased through its application to an industrial pH neutralization process, where critical system dynamics are uncovered.
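The core pipeline described above can be sketched in a minimal form: approximate the diagonal of the Fisher Information Matrix from squared output gradients, map those magnitudes through a logistic function into (0, 1) relevance scores, and gate the inputs element-wise before they reach the black-box model. The toy system `black_box`, the finite-difference gradient estimate, and the `scale`/`bias` calibration knobs are all illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def black_box(x):
    # Toy nonlinear system (assumption): feature 0 dominates,
    # feature 1 contributes mildly, feature 2 is inert.
    return np.sin(3.0 * x[:, 0]) + 0.5 * x[:, 1] ** 2 + 0.0 * x[:, 2]

def fim_diagonal(f, X, eps=1e-4):
    """Diagonal FIM approximation: mean squared output gradient
    per input feature, estimated by central finite differences."""
    n, d = X.shape
    grads = np.zeros((n, d))
    for j in range(d):
        Xp, Xm = X.copy(), X.copy()
        Xp[:, j] += eps
        Xm[:, j] -= eps
        grads[:, j] = (f(Xp) - f(Xm)) / (2.0 * eps)
    return (grads ** 2).mean(axis=0)

def relevance_scores(fim_diag, scale=1.0, bias=0.0):
    # Logistic map turns FIM magnitudes into (0, 1) "activation
    # probabilities"; scale/bias are hypothetical calibration knobs.
    return sigmoid(scale * np.log1p(fim_diag) + bias)

rng = np.random.default_rng(0)
X = rng.normal(size=(512, 3))
fim = fim_diagonal(black_box, X)
rel = relevance_scores(fim)
X_gated = X * rel  # element-wise gating fed to the black-box model
```

Because the inert third feature produces near-zero gradients, its relevance collapses toward the logistic baseline, while the strongly nonlinear first feature receives the highest score; in the full method these scores would be learned jointly with the model rather than computed post hoc.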