Instruction following has catalyzed the recent era of Large Language Models (LLMs) and is the foundational skill underpinning more advanced capabilities such as reasoning and agentic behaviors. As tasks grow more challenging, the logic structures embedded in natural language instructions become increasingly intricate. However, how well LLMs perform on such logic-rich instructions remains under-explored. We propose LogicIFGen and LogicIFEval. LogicIFGen is a scalable, automated framework for generating verifiable instructions from code functions, which naturally express rich logic such as conditions, loops, and function calls. We further curate a collection of complex code functions and use LogicIFGen to construct LogicIFEval, a benchmark comprising 426 verifiable logic-rich instructions. Our experiments demonstrate that current state-of-the-art LLMs still struggle to follow the instructions in LogicIFEval correctly: most models follow fewer than 60% of them, revealing significant deficiencies in their instruction-following ability. Code and Benchmark: https://github.com/mianzhang/LogicIF
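To make the generate-and-verify idea concrete, here is a minimal sketch, assuming a Python setting; all names (`seed_function`, `helper_digit_sum`, `verify`) are hypothetical illustrations, not LogicIFGen's actual API. A seed code function embedding a loop, a condition, and a helper call is executed on fixed inputs, and a model is judged to have followed the corresponding natural-language rendering of that logic only if its answer matches the function's true output.

```python
# Illustrative sketch of verification via code execution (hypothetical names,
# not the LogicIFGen API): the seed function's real output on fixed inputs
# serves as ground truth for checking an LLM's instruction following.

def helper_digit_sum(n: int) -> int:
    """Helper call embedded in the seed function."""
    return sum(int(d) for d in str(abs(n)))

def seed_function(nums: list[int]) -> int:
    """Seed function mixing a loop, a condition, and a function call."""
    total = 0
    for n in nums:
        if n % 2 == 0:                    # condition inside a loop
            total += helper_digit_sum(n)  # nested function call
        else:
            total -= 1
    return total

def verify(llm_answer: int, inputs: list[int]) -> bool:
    """The instruction counts as followed iff the answer matches execution."""
    return llm_answer == seed_function(inputs)

if __name__ == "__main__":
    test_inputs = [12, 7, 40]
    # Expected: digit_sum(12)=3, -1 for odd 7, digit_sum(40)=4 -> 3 - 1 + 4 = 6
    print(seed_function(test_inputs))  # 6
    print(verify(6, test_inputs))      # True
```

Because verification reduces to comparing against deterministic execution, checking whether an instruction was followed requires no human judgment or LLM grading.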