New evaluation benchmarks are needed to keep pace with the rapid development of Large Language Models (LLMs). In this work, we present Chinese SimpleQA, the first comprehensive Chinese benchmark for evaluating the ability of language models to answer short factual questions. Chinese SimpleQA has five main properties (i.e., Chinese, Diverse, High-quality, Static, Easy-to-evaluate). Specifically, first, we focus on the Chinese language, covering 6 major topics with 99 diverse subtopics. Second, we conduct a comprehensive quality-control process to obtain high-quality questions and answers, where the reference answers are static and do not change over time. Third, following SimpleQA, the questions and answers are very short, and the grading process is easy to run via the OpenAI API. Based on Chinese SimpleQA, we perform a comprehensive evaluation of the factuality abilities of existing LLMs. Finally, we hope that Chinese SimpleQA can guide developers toward a better understanding of the Chinese factuality abilities of their models and facilitate the growth of foundation models.
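The easy-to-evaluate grading step can be sketched as an LLM-as-judge comparison: a grader model receives the question, the static reference answer, and the model's prediction, and returns a letter grade. The prompt wording, helper names, and the three-way A/B/C grade scheme below follow OpenAI's SimpleQA grader and are illustrative assumptions, not the paper's verbatim prompt:

```python
# Hedged sketch of a SimpleQA-style grading step. The judge reply would in
# practice come from a grading model (e.g., via the OpenAI API); here we only
# show prompt assembly and grade parsing. All names are illustrative.

GRADE_LABELS = {
    "A": "CORRECT",
    "B": "INCORRECT",
    "C": "NOT_ATTEMPTED",
}

def build_grader_prompt(question: str, reference: str, prediction: str) -> str:
    """Assemble the judge prompt sent to the grading model."""
    return (
        "Grade the predicted answer against the reference answer.\n"
        "Reply with a single letter: A (correct), B (incorrect), "
        "C (not attempted).\n\n"
        f"Question: {question}\n"
        f"Reference answer: {reference}\n"
        f"Predicted answer: {prediction}\n"
        "Grade:"
    )

def parse_grade(judge_output: str) -> str:
    """Map the judge's raw reply to a grade label; default to NOT_ATTEMPTED."""
    for ch in judge_output.strip().upper():
        if ch in GRADE_LABELS:
            return GRADE_LABELS[ch]
    return GRADE_LABELS["C"]
```

Because each reference answer is short and static, accuracy is simply the fraction of questions graded CORRECT, which keeps the evaluation cheap and reproducible.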