Reliably ensuring that Large Language Models (LLMs) follow complex instructions is a critical challenge, as existing benchmarks often fail to reflect real-world use or to isolate compliance from task success. We introduce MOSAIC (MOdular Synthetic Assessment of Instruction Compliance), a modular framework that uses a dynamically generated dataset with up to 20 application-oriented generation constraints to enable a granular, independent analysis of this capability. Using this benchmark, our evaluation of five LLMs from different families demonstrates that compliance is not a monolithic capability but varies significantly with constraint type, quantity, and position. The analysis reveals model-specific weaknesses, uncovers synergistic and conflicting interactions between instructions, and identifies distinct positional biases such as primacy and recency effects. These granular insights are critical for diagnosing model failures and for developing more reliable LLMs for systems that demand strict adherence to complex instructions.