Managing long texts is challenging for large language models (LLMs) due to limited context window sizes. This study introduces UIO-LLMs, an unbiased incremental optimization approach for memory-enhanced transformers under long-context settings. We initially conceptualize the process as a streamlined encoder-decoder framework, in which a weight-shared encoder and decoder respectively encapsulate a context segment into memories and leverage these memories to predict the outputs of the subsequent segment. Subsequently, by treating our memory-enhanced transformers as fully-connected recurrent neural networks (RNNs), we refine the training process using the Truncated Backpropagation Through Time (TBPTT) algorithm, which incorporates innovative incremental optimization techniques. These techniques not only reduce time complexity but also address the bias in gradient computation through an unbiased optimization process. UIO-LLMs successfully handle long contexts, for example extending the context window of Llama2-7b-chat from 4K to 100K tokens with only about 2% additional parameters, while keeping the inference cost nearly linear as context length increases.
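The segment-wise encode/decode loop described above can be sketched structurally. This is a minimal illustration, not the paper's implementation: the functions below use trivial numeric stand-ins in place of real transformer calls, and the names (`encode`, `decode`, `stream`) and constants (`SEGMENT_LEN`, `MEMORY_SIZE`) are illustrative assumptions. The key structural point it shows is that the memory produced from segment *i* conditions the prediction of segment *i+1*, so compute per step stays constant as the context grows.

```python
# Structural sketch only: segment-by-segment processing with a
# fixed-size carried memory, as in a memory-enhanced transformer.
# All numeric operations are placeholders for real model calls.

SEGMENT_LEN = 4   # hypothetical segment length
MEMORY_SIZE = 2   # hypothetical number of memory slots

def encode(segment, memory):
    """Stand-in encoder: compress a segment (plus prior memory) into a
    fixed-size memory. In the paper this is a transformer whose weights
    are shared with the decoder."""
    pooled = sum(segment) / len(segment)
    if not memory:
        return [pooled] * MEMORY_SIZE
    return [pooled + m for m in memory]

def decode(memory, segment):
    """Stand-in decoder: 'predict' the next segment conditioned on the
    memory (here, just offset each token by the mean memory value)."""
    bias = sum(memory) / len(memory)
    return [tok + bias for tok in segment]

def stream(tokens):
    """Process a long sequence segment by segment: the memory built
    from segment i is used when predicting segment i+1, so per-step
    cost does not grow with total context length."""
    segments = [tokens[i:i + SEGMENT_LEN]
                for i in range(0, len(tokens), SEGMENT_LEN)]
    memory, outputs = [], []
    for prev, nxt in zip(segments, segments[1:]):
        memory = encode(prev, memory)   # compress what was just read
        outputs.append(decode(memory, nxt))
    return outputs
```

During training, TBPTT would backpropagate through only a bounded window of these recurrent steps; the paper's contribution is making that truncation unbiased, which this sketch does not model.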