Linear recurrent neural networks (LRNNs) provide a structured approach to sequence modeling that bridges classical linear dynamical systems and modern deep learning, offering both expressive power and theoretical guarantees on stability and trainability. In recent years, multiple LRNN-based architectures have been proposed, each introducing distinct parameterizations, discretization schemes, and implementation constraints. However, existing implementations are fragmented across different software frameworks, often rely on framework-specific optimizations, and in some cases require custom CUDA kernels or lack publicly available code altogether. As a result, using, comparing, or extending LRNNs requires substantial implementation effort. To address this, we introduce $\texttt{lrnnx}$, a unified software library that implements several modern LRNN architectures under a common interface. The library exposes multiple levels of control, allowing users to work directly with core components or higher-level model abstractions. $\texttt{lrnnx}$ aims to improve accessibility, reproducibility, and extensibility of LRNN research and applications. We make our code available under a permissive MIT license.