Similarity measures are widely used to interpret the representational geometries that neural networks use to solve tasks. Yet, because existing methods compare the extrinsic geometry of representations in state space, rather than their intrinsic geometry, they may fail to capture subtle yet crucial distinctions between fundamentally different neural network solutions. Here, we introduce metric similarity analysis (MSA), a novel method that leverages tools from Riemannian geometry to compare the intrinsic geometry of neural representations under the manifold hypothesis. We show that MSA can be used to i) disentangle features of neural computations in deep networks with different learning regimes, ii) compare nonlinear dynamics, and iii) investigate diffusion models. MSA thus offers a mathematically grounded and broadly applicable framework for understanding the mechanisms behind neural computations by comparing their intrinsic geometries.
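To make the intrinsic/extrinsic distinction concrete, here is a minimal illustrative sketch (not the paper's MSA method): intrinsic distances on a sampled manifold are approximated by shortest paths in a k-nearest-neighbor graph, Isomap-style, and then compared across two embeddings of the same underlying 1-D manifold. The `geodesic_distances` helper, the toy data, and the parameter `k=5` are all assumptions chosen for illustration.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from scipy.spatial.distance import pdist, squareform

def geodesic_distances(X, k=5):
    """Approximate intrinsic (geodesic) distances on the data manifold
    via shortest paths in a k-nearest-neighbor graph (Isomap-style).
    This is an illustrative stand-in, not the MSA method itself."""
    D = squareform(pdist(X))                 # extrinsic pairwise distances
    n = len(X)
    idx = np.argsort(D, axis=1)[:, 1:k + 1]  # k nearest neighbors, skip self
    G = np.full((n, n), np.inf)              # inf = no edge (csgraph convention)
    rows = np.repeat(np.arange(n), k)
    G[rows, idx.ravel()] = D[rows, idx.ravel()]
    G = np.minimum(G, G.T)                   # symmetrize the neighbor graph
    return shortest_path(G, method="D", directed=False)

# Toy example: the same 1-D manifold embedded two ways --
# as a straight segment and as a three-quarter circle arc.
t = np.linspace(0.0, 1.5 * np.pi, 60)
line = np.c_[t, np.zeros_like(t)]
arc = np.c_[np.cos(t), np.sin(t)]

iu = np.triu_indices(len(t), 1)
# Extrinsic view: Euclidean distance matrices disagree, because chords
# shrink as the arc bends back on itself.
ext_corr = np.corrcoef(pdist(line), pdist(arc))[0, 1]
# Intrinsic view: geodesic distances along both embeddings are nearly
# proportional, so the two representations look alike.
int_corr = np.corrcoef(geodesic_distances(line)[iu],
                       geodesic_distances(arc)[iu])[0, 1]
print(f"extrinsic corr: {ext_corr:.3f}, intrinsic corr: {int_corr:.3f}")
```

Under this sketch, the intrinsic comparison recognizes the two embeddings as the same curve while the extrinsic comparison is distorted by the bending, mirroring the abstract's claim that state-space comparisons can miss distinctions (or, here, similarities) that intrinsic geometry reveals.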