Intelligent systems across physics, language and perception often exhibit factorisable structure, yet they are typically modelled by monolithic neural architectures that do not explicitly exploit it. The separable neural architecture (SNA) addresses this by formalising a representational class that unifies additive, quadratic and tensor-decomposed neural models. By constraining interaction order and tensor rank, SNAs impose a structural inductive bias that factorises high-dimensional mappings into low-arity components. Separability need not be a property of the system itself: it often emerges in the coordinates or representations through which the system is expressed. Crucially, this coordinate-aware formulation reveals a structural analogy between chaotic spatiotemporal dynamics and linguistic autoregression. By treating continuous physical states as smooth, separable embeddings, SNAs enable distributional modelling of chaotic systems, mitigating the nonphysical drift characteristic of deterministic operators whilst remaining applicable to discrete sequences. The compositional versatility of this approach is demonstrated across four domains: autonomous waypoint navigation via reinforcement learning, inverse generation of multifunctional microstructures, distributional modelling of turbulent flow, and neural language modelling. These results establish the separable neural architecture as a domain-agnostic primitive for predictive and generative intelligence, capable of unifying deterministic and distributional representations.
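To make the representational class concrete, a minimal illustrative form (introduced here for exposition, not taken verbatim from the paper; the symbols $f$, $g_{r,d}$, $R$ and $D$ are assumptions) is the rank- and order-constrained expansion
\[
f(x_1,\dots,x_D)\;\approx\;\sum_{r=1}^{R}\prod_{d=1}^{D} g_{r,d}(x_d),
\]
where each factor $g_{r,d}$ depends on a single coordinate (or, more generally, on a small group of coordinates whose size fixes the interaction order) and the number of summands $R$ plays the role of the tensor rank. Under this reading, restricting the expansion to order one recovers additive models, order two recovers quadratic ones, and the general product form corresponds to tensor-decomposed models.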