This paper introduces a theoretical framework that connects the linear layers of neural networks with the Mahalanobis distance, offering a new perspective on neural network interpretability. Whereas previous studies have examined these layers and their activation functions primarily for performance optimization, our work interprets them through statistical distance measures, an area that remains relatively unexplored in neural network research. Establishing this connection provides a foundation for developing more interpretable neural network models, which is crucial for applications requiring transparency. Although this work is theoretical and does not include empirical data, the proposed distance-based interpretation has the potential to enhance model robustness, improve generalization, and yield more intuitive explanations of neural network decisions.
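To make the kind of connection the abstract alludes to concrete, the following is a minimal sketch, not the paper's formal framework: if a linear layer's weights are chosen as a whitening transform of an assumed Gaussian with mean `mu` and covariance `Sigma` (both hypothetical quantities introduced here for illustration), the Euclidean norm of the layer's output equals the Mahalanobis distance from `mu`.

```python
import numpy as np

# Illustrative assumption: a linear layer y = W x + b with
# W = Sigma^{-1/2} and b = -Sigma^{-1/2} mu computes the whitened
# residual Sigma^{-1/2} (x - mu), whose Euclidean norm is the
# Mahalanobis distance of x from mu under covariance Sigma.

rng = np.random.default_rng(0)
d = 3
mu = rng.normal(size=d)                      # assumed class mean (hypothetical)
A = rng.normal(size=(d, d))
Sigma = A @ A.T + d * np.eye(d)              # assumed positive-definite covariance

# Symmetric inverse square root of Sigma via eigendecomposition.
eigvals, eigvecs = np.linalg.eigh(Sigma)
W = eigvecs @ np.diag(eigvals ** -0.5) @ eigvecs.T
b = -W @ mu

x = rng.normal(size=d)                       # an arbitrary input

linear_norm = np.linalg.norm(W @ x + b)      # norm of the linear layer's output
mahalanobis = np.sqrt((x - mu) @ np.linalg.inv(Sigma) @ (x - mu))

# The two quantities agree up to floating-point error.
assert np.isclose(linear_norm, mahalanobis)
print(linear_norm, mahalanobis)
```

Under these assumptions, reading a linear layer's output norm as a statistical distance is what gives the interpretation its intuitive character: inputs far from the assumed mean, as measured in the geometry induced by `Sigma`, produce large activations.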