Large language models (LLMs) have advanced rapidly and demonstrated impressive capabilities. In-Context Learning (ICL) and Parameter-Efficient Fine-Tuning (PEFT) are currently the two mainstream methods for adapting LLMs to downstream tasks. ICL typically constructs a few-shot learning scenario, either manually or via a Retrieval-Augmented Generation (RAG) system, helping models quickly grasp domain knowledge or question-answering patterns without changing model parameters. However, this approach involves trade-offs, such as slower inference and a larger memory footprint. PEFT helps the model adapt to tasks by modifying only a small number of parameters, but training still demands substantial hardware resources even though few parameters are updated. To address these challenges, we propose Reference Trustable Decoding (RTD), a paradigm that allows models to quickly adapt to new tasks without fine-tuning while keeping inference costs low. RTD constructs a reference datastore from the provided training examples and optimizes the LLM's final vocabulary distribution by flexibly selecting suitable references based on the input, yielding more trustable responses and enabling the model to adapt to downstream tasks at low cost. Experimental evaluations of various LLMs on different benchmarks demonstrate that RTD establishes a new paradigm for adapting models to downstream tasks. Furthermore, our method is largely orthogonal to traditional methods and can be used alongside them. Our code can be found at https://github.com/ShiLuohe/ReferenceTrustableDecoding
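To make the datastore-and-decoding idea concrete, here is a minimal sketch of one plausible reading of the mechanism: hidden states from training examples are stored as keys with their next tokens as values, and at decoding time the distribution over retrieved references is interpolated with the model's own output distribution. All function names, the distance metric, and the interpolation weight `lam` are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def build_datastore(hidden_states, next_token_ids):
    # Keys: hidden states from training examples; values: the token that followed each.
    return np.asarray(hidden_states, dtype=float), np.asarray(next_token_ids)

def rtd_distribution(query_hidden, base_logits, keys, values, vocab_size, k=4, lam=0.5):
    # Retrieve the k references nearest to the current hidden state (L2 distance,
    # an assumed choice of metric).
    dists = np.linalg.norm(keys - query_hidden, axis=1)
    nearest = np.argsort(dists)[:k]
    # Convert negated distances into weights over the retrieved references.
    weights = softmax(-dists[nearest])
    ref_probs = np.zeros(vocab_size)
    for w, tok in zip(weights, values[nearest]):
        ref_probs[tok] += w
    # Interpolate the reference distribution with the model's own distribution;
    # lam is a hypothetical mixing weight.
    return lam * ref_probs + (1.0 - lam) * softmax(base_logits)
```

Because the datastore is consulted only at the final-distribution stage, no gradient computation is needed, which is consistent with the claim of adaptation without fine-tuning.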