Following multiple instructions is a crucial ability for large language models (LLMs). Evaluating this ability comes with significant challenges: (i) limited coherence between multiple instructions, (ii) positional bias, where the order of instructions affects model performance, and (iii) a lack of objectively verifiable tasks. To address these issues, we introduce a benchmark designed to evaluate models' abilities to follow multiple instructions through sequential instruction following (SIFo) tasks. In SIFo, the successful completion of multiple instructions is verifiable by examining only the final instruction. Our benchmark evaluates instruction following using four tasks (text modification, question answering, mathematics, and security rule following), each assessing a different aspect of sequential instruction following. Our evaluation of popular LLMs, both closed-source and open-source, shows that more recent and larger models significantly outperform their older and smaller counterparts on the SIFo tasks, validating the benchmark's effectiveness. All models struggle to follow sequences of instructions, revealing a notable lack of robustness in today's language models.