Multiple-choice benchmarks, consisting of various prompts and choices, are among the most widely used methods to assess a language model's natural language understanding capability. Given a specific prompt, we typically compute $P(Choice|Prompt)$ to evaluate how likely a language model is to generate the correct choice compared to incorrect ones. However, we observe that performance measured this way reflects not only the model's comprehension of the prompt but also its inherent bias toward certain choices, independent of the prompt. This issue makes it challenging to accurately measure a model's natural language understanding, as models may select the correct answer without fully understanding the prompt. To address this limitation, we propose a novel metric called ANPMI, which normalizes Pointwise Mutual Information (PMI) by $-\log P(Choice)$. ANPMI provides a more accurate assessment of the model's natural language understanding by ensuring that a question is difficult to answer without properly understanding the prompt.
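As a minimal sketch of the metric described above: PMI between a prompt and a choice is $\log \frac{P(Choice|Prompt)}{P(Choice)}$, and ANPMI divides this by $-\log P(Choice)$. The function below illustrates the computation from two probabilities; the function name and scoring setup are illustrative, not taken from the paper's code.

```python
import math

def anpmi(p_choice_given_prompt: float, p_choice: float) -> float:
    """Illustrative ANPMI: PMI normalized by -log P(Choice).

    p_choice_given_prompt: model probability of the choice given the prompt.
    p_choice: model probability of the choice without any prompt.
    """
    # PMI = log P(Choice|Prompt) - log P(Choice)
    pmi = math.log(p_choice_given_prompt) - math.log(p_choice)
    # Normalizing by -log P(Choice) discounts choices the model
    # already favors regardless of the prompt.
    return pmi / (-math.log(p_choice))
```

Under this definition, a choice whose probability is unchanged by the prompt scores 0, while a choice the prompt makes certain scores 1, so the prompt must genuinely raise the choice's probability for the score to be high.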