New evaluation benchmarks are needed to keep pace with the rapid development of Large Language Models (LLMs). In this work, we present Chinese SimpleQA, the first comprehensive Chinese benchmark for evaluating the factuality of language models when answering short questions. Chinese SimpleQA has five main properties: Chinese, diverse, high-quality, static, and easy-to-evaluate. Specifically, first, we focus on the Chinese language, covering 6 major topics with 99 diverse subtopics. Second, we conduct a comprehensive quality-control process to obtain high-quality questions and answers, where the reference answers are static and do not change over time. Third, following SimpleQA, the questions and answers are very short, and the grading process is easy to run via the OpenAI API. Based on Chinese SimpleQA, we perform a comprehensive evaluation of the factuality of existing LLMs. Finally, we hope that Chinese SimpleQA can guide developers to better understand the Chinese factuality of their models and facilitate the growth of foundation models.
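The SimpleQA-style grading described above can be sketched as an LLM-as-judge call: the grader model sees the question, the static reference answer, and the model's prediction, and returns one of three labels. The prompt wording, the letter labels, and the `gpt-4o` model name below are illustrative assumptions, not the benchmark's exact grader template.

```python
# Minimal sketch of SimpleQA-style automated grading via the OpenAI API.
# Prompt text, label letters, and the default model name are assumptions.

GRADE_LABELS = {"A": "CORRECT", "B": "INCORRECT", "C": "NOT_ATTEMPTED"}


def build_grading_prompt(question: str, reference: str, prediction: str) -> str:
    """Assemble the judge prompt comparing a prediction to the reference answer."""
    return (
        "Grade the predicted answer against the reference answer.\n"
        f"Question: {question}\n"
        f"Reference answer: {reference}\n"
        f"Predicted answer: {prediction}\n"
        "Reply with a single letter: A (correct), B (incorrect), "
        "or C (not attempted)."
    )


def parse_grade(judge_reply: str) -> str:
    """Map the grader's one-letter reply to a label; unrecognized replies
    fall back to NOT_ATTEMPTED rather than crashing the evaluation loop."""
    letter = judge_reply.strip()[:1].upper()
    return GRADE_LABELS.get(letter, "NOT_ATTEMPTED")


def grade(client, question: str, reference: str, prediction: str,
          model: str = "gpt-4o") -> str:
    """One chat-completion call per item; `client` is an openai.OpenAI instance."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user",
                   "content": build_grading_prompt(question, reference, prediction)}],
    )
    return parse_grade(resp.choices[0].message.content)
```

Because the reference answers are static, the same grader call can be re-run at any time and per-topic accuracy is just the fraction of items graded CORRECT.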