With advancements in self-supervised learning, the availability of trillions of tokens in pre-training corpora, instruction fine-tuning, and the development of large Transformers with billions of parameters, large language models (LLMs) are now capable of generating factual and coherent responses to human queries. However, the mixed quality of training data can lead to the generation of undesired responses, presenting a significant challenge. Over the past two years, various methods have been proposed from different perspectives to enhance LLMs, particularly in aligning them with human expectations. Despite these efforts, there has not been a comprehensive survey that categorizes and details these approaches. In this work, we aim to address this gap by categorizing these papers into distinct topics and providing detailed explanations of each alignment method, thereby helping readers gain a thorough understanding of the current state of the field.