Neural networks can accurately forecast complex dynamical systems, yet how they internally represent the underlying latent geometry remains poorly understood. We study neural forecasters through the lens of representational alignment, introducing anchor-based, geometry-agnostic relative embeddings that remove rotational and scaling ambiguities in latent spaces. Applying this framework across seven canonical dynamical systems, ranging from periodic to chaotic, we reveal reproducible family-level structure: multilayer perceptrons align with other MLPs, recurrent networks with RNNs, while transformers and echo-state networks achieve strong forecasts despite weaker alignment. Alignment generally correlates with forecasting accuracy, yet high accuracy can coexist with low alignment. Relative geometry thus provides a simple, reproducible foundation for comparing how model families internalize and represent dynamical structure.
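The abstract does not spell out how the relative embeddings are computed. A minimal sketch of the general idea, assuming the common construction in which each latent state is re-expressed as its cosine similarities to a fixed set of anchor states (the exact similarity measure and anchor-selection scheme used in the paper may differ), could look like this; the invariance check at the end illustrates why such embeddings are unaffected by rotations and uniform rescalings of the latent space:

```python
import numpy as np

def relative_embedding(Z, A):
    """Map latent states Z (n, d) to cosine similarities against anchors A (k, d)."""
    Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    An = A / np.linalg.norm(A, axis=1, keepdims=True)
    return Zn @ An.T  # (n, k) relative representation

rng = np.random.default_rng(0)
Z = rng.normal(size=(100, 16))   # hypothetical latent trajectory
A = Z[:10]                       # anchors drawn from the same trajectory

# Apply an arbitrary rotation (orthogonal matrix from QR) and scaling.
Q, _ = np.linalg.qr(rng.normal(size=(16, 16)))
s = 3.0

R1 = relative_embedding(Z, A)
R2 = relative_embedding(s * Z @ Q, s * A @ Q)
print(np.allclose(R1, R2))  # the relative representation is unchanged
```

Because cosine similarity depends only on angles between vectors, any orthogonal transform applied jointly to states and anchors, and any positive rescaling, leaves the relative embedding fixed, which is what makes cross-model comparison well-posed.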