Many scientific and geometric problems exhibit general linear symmetries, yet most equivariant neural networks are built for compact groups or simple vector features, limiting their reuse on matrix-valued data such as covariance, inertia, or shape tensors. We introduce Reductive Lie Neurons (ReLNs), an exactly GL(n)-equivariant architecture that natively supports matrix-valued and Lie-algebraic features. ReLNs resolve a central stability issue for reductive Lie algebras by introducing a non-degenerate, adjoint (conjugation)-invariant bilinear form, enabling principled nonlinear interactions and invariant feature construction in a single architecture that transfers across subgroups without redesign. We demonstrate ReLNs on algebraic tasks with sl(3) and sp(4) symmetries, Lorentz-equivariant particle physics, uncertainty-aware drone state estimation via joint velocity-covariance processing, learning from 3D Gaussian-splat representations, and the EMLP double-pendulum benchmark spanning multiple symmetry groups. ReLNs consistently match or outperform strong equivariant and self-supervised baselines while using substantially fewer parameters and less compute, improving the accuracy-efficiency trade-off and providing a practical, reusable backbone for learning with broad linear symmetries. Project page: https://reductive-lie-neuron.github.io/
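To make the central claim concrete: a minimal sketch of what "non-degenerate adjoint (conjugation)-invariant bilinear form" means is the trace form ⟨X, Y⟩ = tr(XY) on gl(n), which is non-degenerate and unchanged under X ↦ gXg⁻¹ for any g ∈ GL(n), since tr(gXg⁻¹ · gYg⁻¹) = tr(XY). This is an illustrative assumption, not necessarily the paper's exact construction; the snippet below just verifies the invariance numerically.

```python
import numpy as np

# Illustrative sketch (assumed form, not necessarily the paper's exact one):
# on gl(n), the trace form <X, Y> = tr(XY) is a non-degenerate bilinear form
# invariant under the adjoint (conjugation) action X -> g X g^{-1}, g in GL(n).

rng = np.random.default_rng(0)
n = 3

def trace_form(X, Y):
    """Ad-invariant bilinear form <X, Y> = tr(XY) on gl(n)."""
    return np.trace(X @ Y)

X = rng.standard_normal((n, n))   # generic elements of gl(3)
Y = rng.standard_normal((n, n))
g = rng.standard_normal((n, n))   # generically invertible, so g is in GL(3)
g_inv = np.linalg.inv(g)

Ad_X = g @ X @ g_inv              # adjoint (conjugation) action of g
Ad_Y = g @ Y @ g_inv

# Conjugation invariance: the form's value is unchanged by the GL(n) action,
# so scalars built from it can serve as exactly invariant features.
assert np.isclose(trace_form(Ad_X, Ad_Y), trace_form(X, Y))
print(trace_form(X, Y), trace_form(Ad_X, Ad_Y))
```

Such invariant scalars are the kind of quantity an exactly equivariant architecture can use to gate nonlinearities without breaking the symmetry.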