While Transformers and other sequence-parallelizable neural network architectures are the current state of the art in sequence modeling, they fundamentally lack state-tracking capabilities, which are important for time-series tasks and logical reasoning. Traditional RNNs like LSTMs and GRUs, as well as modern variants like sLSTM, do have these capabilities, at the cost of strictly sequential processing. While this is often seen as a severe limitation, we show how fast such networks can run with FlashRNN, our hardware-optimized implementation in Triton and CUDA with kernels tuned down to the register level on modern GPUs. We extend traditional RNNs with a parallelization variant that processes multiple RNNs with smaller hidden states in parallel, similar to the head-wise processing in Transformers. To remain flexible across different GPU variants, we introduce a new optimization framework for hardware-internal cache sizes, memory, and compute handling. It models the hardware with polyhedral-like constraints, including a notion of divisibility, which speeds up the solution process in our ConstrINT library for general integer constraint satisfaction problems (integer CSPs). We show that our kernels achieve up to 50x speed-ups over a vanilla PyTorch implementation, and that our CUDA kernels allow 40x larger hidden sizes compared to our Triton implementation. Our open-source kernels and the optimization library are released here to boost research on state-tracking-enabled RNNs and sequence modeling: \url{https://github.com/NX-AI/flashrnn}
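The head-wise parallelization variant mentioned above can be illustrated with a minimal sketch (this is not the FlashRNN API; the function name, shapes, and the plain tanh recurrence are illustrative assumptions): the dense recurrent matrix over hidden size D is replaced by H independent heads of size d = D // H, i.e. a block-diagonal recurrence in which each head only interacts with its own state slice.

```python
import numpy as np

# Hypothetical sketch of head-wise RNN parallelization: H independent heads
# of size d, equivalent to a block-diagonal recurrent weight matrix.
def rnn_step_headwise(h, R_heads, x):
    """One recurrent step; each head sees only its own d-dim state slice.
    h: (H, d) hidden state, R_heads: (H, d, d) per-head recurrent weights,
    x: (H, d) pre-computed input projection for this timestep."""
    return np.tanh(np.einsum('hij,hj->hi', R_heads, h) + x)

H, d = 4, 64                      # e.g. D = 256 split into 4 heads of 64
rng = np.random.default_rng(0)
h = rng.standard_normal((H, d))
R = rng.standard_normal((H, d, d)) / np.sqrt(d)
x = rng.standard_normal((H, d))
h_next = rnn_step_headwise(h, R, x)
print(h_next.shape)               # (4, 64)
```

Because the heads are independent, each per-head matrix-vector product is small enough to keep in fast on-chip memory, which is what enables register-level optimization of the recurrence.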
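To give a flavor of the integer CSPs with divisibility constraints that ConstrINT solves, here is a toy brute-force sketch (the constants, function name, and objective are hypothetical, and real solvers prune the search rather than enumerate): choose tile sizes that divide the hidden size, fit a weight tile into a fixed cache budget, and maximize tile area.

```python
from itertools import product

# Toy integer CSP: find tile sizes (tb, tc) subject to divisibility and
# cache-capacity constraints, maximizing the tile area tb * tc.
# (Illustrative only -- not the ConstrINT API or its solving strategy.)
def solve_tiling(hidden_size, cache_bytes, bytes_per_elem=4, max_tile=256):
    best = None
    for tb, tc in product(range(1, max_tile + 1), repeat=2):
        if hidden_size % tb or hidden_size % tc:    # divisibility constraints
            continue
        if tb * tc * bytes_per_elem > cache_bytes:  # capacity constraint
            continue
        if best is None or tb * tc > best[0] * best[1]:
            best = (tb, tc)
    return best

tb, tc = solve_tiling(hidden_size=768, cache_bytes=64 * 1024)
print(tb, tc)  # a divisor pair of 768 whose float32 tile fits in 64 KiB
```

Modeling divisibility explicitly matters here: without it, a solver could return tile sizes that force ragged boundary handling in the kernel, whereas exact divisors keep every thread block's work uniform.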