Large language models (LLMs) have played a fundamental role in various natural language processing tasks with powerful prompting techniques. However, in real-world applications, repeated queries often share similar prompt components, which imposes significant computational burdens during inference. Existing prompt compression and direct fine-tuning methods aim to tackle these challenges, yet they frequently struggle to strike an optimal balance between cost-efficiency and performance, especially on complex tasks such as NL2Code. In this paper, we propose a novel method, PromptIntern, which internalizes prompt knowledge into model parameters via progressive fine-tuning. Our method enables LLMs to emulate the human learning process for a new task: detailed templates and examples in a prompt are gradually internalized and phased out as the model grows accustomed to the task. Extensive experiments demonstrate that our method reduces inference tokens by over 90%, speeds up inference by 4.2x, and cuts monetary cost by 88.3%.
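The core idea of phasing out prompt components during fine-tuning can be illustrated with a minimal sketch. The function name, the linear schedule, and the NL2Code example below are illustrative assumptions, not the paper's exact algorithm:

```python
# Hypothetical sketch of PromptIntern-style progressive prompt reduction.
# The linear schedule and all names here are assumptions for illustration.

def build_prompt(query, template, examples, step, total_steps):
    """Assemble a training prompt, phasing out the template and examples.

    At step 0 the full template and all examples are included; by the
    final step only the bare query remains, so the model must rely on
    knowledge internalized into its parameters.
    """
    progress = step / total_steps               # 0.0 -> 1.0 over fine-tuning
    keep = int(len(examples) * (1 - progress))  # linearly drop examples
    parts = []
    if progress < 1.0:                          # template removed at the end
        parts.append(template)
    parts.extend(examples[:keep])
    parts.append(query)
    return "\n\n".join(parts)

# Example: NL2Code prompts across five fine-tuning stages.
prompts = [
    build_prompt("Translate NL to SQL: list all users.",
                 "You are an NL2Code assistant.",
                 ["Q: count orders\nA: SELECT COUNT(*) FROM orders;",
                  "Q: all products\nA: SELECT * FROM products;"],
                 step, total_steps=4)
    for step in range(5)
]
# Early prompts contain the template plus examples; the last stage
# presents the query alone.
```

After the final stage, inference uses only the bare query, which is what yields the reported token and latency savings.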