Existing benchmarks are becoming saturated and struggle to separate model performance, owing to factors such as data contamination and advancing LLM capabilities. This paper introduces EMDM (Enhanced Model Differentiation Metric), a novel weighted metric that revitalizes benchmarks by enhancing model separation. EMDM integrates final-answer and Chain-of-Thought (CoT) reasoning correctness, assigning weights based on the complexity and reasoning depth required to solve a given sample in the evaluation data. Using a baseline LLM in two setups (Unguided, where the model has no prior exposure to the test samples, and Guided, where the model has prior knowledge of the desired answer), EMDM distinguishes instances of varying difficulty. The CoT and answer correctness from these setups inform an optimization objective for weight assignment, yielding a more nuanced evaluation of model performance. Whereas the exact-match (EM) metric achieves 17% separation on ARC-Challenge, EMDM achieves 46%, demonstrating its effectiveness in differentiating models based on their reasoning and knowledge requirements.
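To make the weighting scheme concrete, the following Python sketch illustrates one way the idea could be realized. It is a minimal sketch under stated assumptions, not the paper's implementation: the bucket names, the blending parameter `alpha`, and the function names `difficulty_bucket` and `emdm_score` are all illustrative, and the paper derives the actual weights from an optimization objective over the Guided/Unguided correctness signals rather than taking them as free parameters.

```python
# Illustrative sketch of an EMDM-style weighted score.
# All names, buckets, and parameters are assumptions for exposition,
# not the paper's actual method.

from dataclasses import dataclass


@dataclass
class Sample:
    answer_correct: bool    # evaluated model's final-answer correctness
    cot_correct: bool       # evaluated model's CoT correctness
    # Baseline-LLM correctness signals used to bucket sample difficulty:
    guided_correct: bool    # baseline correct with the desired answer in context
    unguided_correct: bool  # baseline correct with no prior exposure


def difficulty_bucket(s: Sample) -> str:
    """Bucket a sample by how the baseline LLM fared in the two setups."""
    if s.unguided_correct:
        return "easy"    # solvable without any prior exposure
    if s.guided_correct:
        return "medium"  # solvable only when guided toward the answer
    return "hard"        # unsolved even with guidance


def emdm_score(samples: list[Sample],
               weights: dict[str, float],
               alpha: float = 0.5) -> float:
    """Weighted accuracy: per-sample credit blends answer and CoT
    correctness via alpha, scaled by the difficulty-bucket weight.
    In the paper, the weights come from an optimization objective;
    here they are supplied directly."""
    num = den = 0.0
    for s in samples:
        w = weights[difficulty_bucket(s)]
        credit = alpha * s.answer_correct + (1 - alpha) * s.cot_correct
        num += w * credit
        den += w
    return num / den if den else 0.0


# Example usage: up-weight hard samples, blend answer/CoT credit evenly.
# score = emdm_score(samples, {"easy": 0.2, "medium": 0.5, "hard": 1.0})
```

The key design point the sketch captures is that harder samples, identified by the baseline model's failures in the Guided and Unguided setups, contribute more to the final score, so two models with equal exact-match accuracy can still be separated by which samples they solve.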