The increasing prevalence of large language models (LLMs) such as GPT-4 across applications has driven a surge in the size of prompts required for optimal performance, creating challenges for computational efficiency. Prompt compression aims to reduce inference cost by minimizing input tokens without compromising task performance. However, existing prompt compression techniques either rely on suboptimal metrics such as information entropy or model compression as a task-agnostic token classification problem that fails to capture task-specific information. To address these issues, we propose a novel and efficient reinforcement learning (RL) based task-aware prompt compression method. To meet low-latency requirements, we leverage an existing Transformer encoder-based token classification model while guiding the learning process with task-specific reward signals using the lightweight REINFORCE algorithm. We evaluate our method on three diverse and challenging tasks: text summarization, question answering, and code summarization. We demonstrate that our RL-guided compression method improves task performance by 8% to 189% across these three scenarios over state-of-the-art compression techniques while satisfying the same compression rate and latency requirements.
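To illustrate the core idea of REINFORCE-guided token selection described above, here is a minimal self-contained sketch. It is not the paper's implementation: the per-token logistic policy, the feature setup, the toy reward (reward informative tokens kept, penalize exceeding a keep budget), and all names are simplifying assumptions standing in for the Transformer encoder classifier and the real task-specific reward.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (assumption): each token has a random feature vector, and
# "informative" tokens are those whose first feature exceeds 0.5. A real
# system would use Transformer encoder representations instead.
D = 4            # per-token feature dimension (assumption)
N_TOKENS = 16    # prompt length (assumption)
KEEP_BUDGET = 8  # compression target: keep at most half the tokens

w = np.zeros(D)  # policy weights: per-token logistic keep-probability


def keep_probs(feats, w):
    """Probability of keeping each token under the current policy."""
    return 1.0 / (1.0 + np.exp(-feats @ w))


def reward(mask, informative):
    # Hypothetical task-specific reward: fraction of informative tokens
    # kept, minus a penalty for exceeding the compression budget.
    hit = (mask & informative).sum() / max(informative.sum(), 1)
    over = max(mask.sum() - KEEP_BUDGET, 0) / N_TOKENS
    return hit - over


lr = 0.2
baseline = 0.0  # running-mean baseline to reduce gradient variance
for step in range(500):
    feats = rng.normal(size=(N_TOKENS, D))
    informative = feats[:, 0] > 0.5          # toy ground truth
    p = keep_probs(feats, w)
    mask = rng.random(N_TOKENS) < p          # sample keep/drop actions
    r = reward(mask, informative)
    # REINFORCE update: advantage * grad of log-prob of sampled actions.
    # For independent Bernoulli actions, d log pi / dw = (mask - p) * feats.
    grad = ((mask - p)[:, None] * feats).sum(axis=0)
    w += lr * (r - baseline) * grad
    baseline += 0.1 * (r - baseline)
```

After training, the policy assigns a higher keep-probability to tokens whose first feature is large, i.e. it has learned to retain the (toy) informative tokens while respecting the budget pressure. The reward function is the only task-specific component, which is what makes the approach task-aware without retraining the encoder from scratch.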