Large language models (LLMs) hold the promise of solving diverse tasks when provided with appropriate natural language prompts. However, prompting often leads models to make predictions with lower accuracy compared to finetuning a model on ample training data. On the other hand, while finetuning LLMs on task-specific data generally improves their performance, abundant annotated datasets are not available for all tasks. Previous work has explored generating task-specific data from state-of-the-art LLMs and using this data to finetune smaller models, but this approach requires access to a language model other than the one being trained, which introduces cost, scalability challenges, and legal hurdles associated with continuously relying on more powerful LLMs. In response to these challenges, we propose SELF-GUIDE, a multi-stage mechanism in which we synthesize task-specific input-output pairs from the student LLM, then use these input-output pairs to finetune the student LLM itself. In our empirical evaluation on the Natural Instructions V2 benchmark, we find that SELF-GUIDE improves the performance of the LLM by a substantial margin. Specifically, we report an absolute improvement of approximately 15% for classification tasks and 18% for generation tasks in the benchmark's metrics. This sheds light on the promise of self-synthesized data guiding LLMs towards becoming task-specific experts without any external learning signals.