Instruction tuning -- fine-tuning a large language model (LLM) on pairs of instructions and desired outcomes -- is an approach that enables pre-trained language models to perform real-world tasks and follow human instructions. Its practical success depends on the model generalizing to a broader set of instructions than those it was trained on. Yet the factors that determine model generalization to such \emph{unseen tasks} are not well understood. In this paper, we experiment with string rewrites, a symbolic task that serves as a building block of Turing-complete Markov algorithms while allowing experimental control of ``inputs'' and ``instructions''. We investigate the trade-off between the number of instructions the model is trained on and the number of training samples provided for each instruction, and we observe that the diversity of the instruction set determines generalization. Generalization emerges once a sufficiently diverse set of tasks is provided, even when very few examples are given per task. Instruction diversity also ensures robustness to non-uniform distributions of instructions in the training set.
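To make the experimental setting concrete, the sketch below (our illustration in Python, not the paper's code) shows one string-rewrite task: an instruction is a rewrite rule, an input is a string, and the desired output is the string after applying the rule. The leftmost-match, Markov-style rule application and the rule encoding are assumptions made for illustration.

\begin{verbatim}
# Minimal sketch of a string-rewrite task (illustrative; not the paper's code).
# An "instruction" is a rewrite rule (pattern, replacement); the model must
# learn to map an input string to the rewritten output.

def apply_rule(s: str, pattern: str, replacement: str) -> str:
    """Apply one Markov-style rewrite at the leftmost occurrence of pattern."""
    i = s.find(pattern)
    if i == -1:
        return s  # no occurrence: the input is returned unchanged
    return s[:i] + replacement + s[i + len(pattern):]

# One (instruction, input, output) training triple, as used in instruction
# tuning; the rule "ab" -> "ba" is a hypothetical example.
rule = ("ab", "ba")
x = "aab"
y = apply_rule(x, *rule)
print(rule, x, "->", y)  # ('ab', 'ba') aab -> aba
\end{verbatim}

Varying the number of distinct rules (instructions) against the number of (input, output) pairs per rule corresponds to the trade-off studied in the paper.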