Long text generation, such as novel writing and discourse-level translation with extremely long contexts, presents significant challenges to current language models. Existing methods mainly focus on extending the model's context window through strategies like length extrapolation. However, these approaches demand substantial hardware resources during training and/or inference. Our proposed method, Temp-Lora, introduces an alternative concept: instead of relying on the KV cache to store all context information, we embed this information directly into a temporary LoRA module. During long text generation, this module is progressively trained on previously generated text. This approach not only preserves contextual knowledge efficiently but also avoids any permanent alteration to the model's parameters, since the module is discarded after generation. Extensive experiments on the PG19 language modeling benchmark and the GuoFeng discourse-level translation benchmark validate the effectiveness of Temp-Lora. Our results show that: 1) Temp-Lora substantially enhances the quality of long text generation, as indicated by a 13.2% decrease in perplexity (PPL) on a subset of PG19, and a 29.3% decrease in PPL together with a 113.2% increase in BLEU score on a subset of GuoFeng; 2) Temp-Lora is compatible with, and enhances, most existing long text generation methods; and 3) Temp-Lora can greatly reduce computational costs by shortening the context window. For example, it can deliver a moderate improvement in generation quality (a 3.8% decrease in PPL) while reducing inference memory usage by 51.5% and latency by 60.0%.
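The core idea above — fold generated context into a temporary low-rank adapter instead of an ever-growing KV cache, then discard the adapter — can be illustrated with a minimal numpy sketch. This is a hypothetical toy, not the paper's implementation: the "model" is a single frozen weight matrix, the "generated chunks" are random data, and the function names (`forward`, `train_adapter_on_chunk`) are invented for illustration.

```python
import numpy as np

# Toy sketch of the Temp-Lora idea (hypothetical, not the paper's code):
# a frozen base weight W is augmented with a temporary low-rank adapter
# delta_W = A @ B that is trained on each newly generated chunk and
# discarded when generation ends, leaving W untouched.

rng = np.random.default_rng(0)
d, r = 8, 2                       # hidden size and LoRA rank (toy values)
W = rng.normal(size=(d, d))       # frozen base weights (never updated)
A = np.zeros((d, r))              # adapter factor, zero-initialized
B = rng.normal(size=(r, d)) * 0.01  # adapter factor, small random init

def forward(x):
    # Effective weight is W + A @ B; the base W is never modified.
    return x @ (W + A @ B)

def train_adapter_on_chunk(chunk_x, chunk_y, lr=0.01, steps=50):
    """Fit only the adapter (A, B) to one chunk; W stays frozen."""
    global A, B
    for _ in range(steps):
        err = forward(chunk_x) - chunk_y
        # Gradient of 0.5 * ||x (W + A B) - y||^2 w.r.t. the effective weight,
        # then chain-ruled into A and B only.
        grad_eff = chunk_x.T @ err
        A -= lr * grad_eff @ B.T
        B -= lr * A.T @ grad_eff

# "Generation" loop: each new chunk of context is folded into the adapter.
W_before = W.copy()
for _ in range(3):
    x = rng.normal(size=(16, d))
    y = 0.1 * (x @ rng.normal(size=(d, d)))  # stand-in for generated-text targets
    train_adapter_on_chunk(x, y)

assert np.allclose(W, W_before)  # base model parameters are unchanged
A, B = None, None                # adapter discarded post-generation
```

The sketch captures the two properties the abstract claims: contextual knowledge is stored in trainable adapter weights rather than a cache, and dropping `A` and `B` restores the original model exactly.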