Large language models (LLMs) have advanced rapidly and demonstrated impressive capabilities. In-Context Learning (ICL) and Parameter-Efficient Fine-Tuning (PEFT) are currently the two mainstream methods for adapting LLMs to downstream tasks. ICL typically constructs a few-shot learning scenario, either manually or via a Retrieval-Augmented Generation (RAG) system, helping models quickly grasp domain knowledge or question-answering patterns without changing model parameters. However, this approach involves trade-offs, such as slower inference and a larger memory footprint. PEFT adapts the model to a task by modifying only a small number of parameters, but the training process still imposes substantial hardware demands even at that reduced scale. To address these challenges, we propose Reference Trustable Decoding (RTD), a paradigm that allows models to adapt quickly to new tasks without fine-tuning while maintaining low inference costs. RTD constructs a reference datastore from the provided training examples and refines the LLM's final vocabulary distribution by flexibly selecting suitable references based on the input, yielding more trustable responses and enabling the model to adapt to downstream tasks at low cost. Experimental evaluations of various LLMs on different benchmarks demonstrate that RTD establishes a new paradigm for adapting models to downstream tasks. Furthermore, our method is strongly orthogonal to traditional methods, allowing concurrent use.
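The datastore-and-interpolation idea behind RTD can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact formulation: it assumes the datastore stores (hidden-state, next-token) pairs from reference examples and blends the model's output distribution with a distribution induced by the k nearest stored references; all function names, the distance metric, and the interpolation weight `lam` are assumptions for illustration.

```python
import numpy as np

def build_datastore(hidden_states, next_tokens):
    """Keys are hidden states from reference examples; values are the
    observed next tokens (a hypothetical datastore layout)."""
    return np.asarray(hidden_states, dtype=float), np.asarray(next_tokens)

def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def rtd_distribution(query, model_logits, keys, values, vocab_size,
                     k=2, lam=0.5):
    """Blend the model's distribution with a k-NN reference distribution."""
    # Distance from the query hidden state to every stored key.
    dists = np.linalg.norm(keys - query, axis=1)
    nearest = np.argsort(dists)[:k]
    # Closer references receive larger weights.
    weights = softmax(-dists[nearest])
    ref_dist = np.zeros(vocab_size)
    for w, idx in zip(weights, nearest):
        ref_dist[values[idx]] += w
    model_dist = softmax(model_logits)
    # lam controls how much the references steer the final distribution.
    return lam * ref_dist + (1.0 - lam) * model_dist

# Toy usage: 3-token vocabulary, 2-dimensional hidden states.
keys, values = build_datastore([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]],
                               [0, 1, 2])
p = rtd_distribution(np.array([0.9, 0.1]), np.array([0.1, 0.2, 0.3]),
                     keys, values, vocab_size=3)
```

Because the retrieval step only reranks the final vocabulary distribution, a sketch like this leaves the base model's parameters untouched, which is what makes the approach orthogonal to ICL and PEFT.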