Large language models (LLMs) demonstrate remarkable performance across various tasks, prompting researchers to develop diverse evaluation benchmarks. However, existing benchmarks typically measure the ability of LLMs to answer individual questions, neglecting the complex interactions found in real-world applications. In this paper, we introduce Compound Question Synthesis (CQ-Syn) to create Compound-QA, a benchmark focusing on compound questions that bundle multiple sub-questions. The benchmark is derived from existing QA datasets, annotated with proprietary LLMs, and verified by humans for accuracy. It covers five categories: Factual-Statement, Cause-and-Effect, Hypothetical-Analysis, Comparison-and-Selection, and Evaluation-and-Suggestion, and it evaluates LLM capabilities along three dimensions: understanding, reasoning, and knowledge. Our evaluation of eight open-source LLMs on Compound-QA reveals distinct response patterns on compound questions, with performance significantly worse than on non-compound questions. We further investigate several methods to improve LLM performance on compound questions; the results show that these approaches significantly strengthen the models' comprehension and reasoning on compound questions.