New Natural Language Processing~(NLP) benchmarks are urgently needed to keep pace with the rapid development of large language models (LLMs). We present Xiezhi, the most comprehensive evaluation suite designed to assess holistic domain knowledge. Xiezhi comprises 249,587 multiple-choice questions spanning 516 diverse disciplines across 13 subjects, accompanied by Xiezhi-Specialty and Xiezhi-Interdiscipline, each with 15k questions. We evaluate 47 cutting-edge LLMs on Xiezhi. Results indicate that LLMs exceed the average performance of humans in science, engineering, agronomy, medicine, and art, but fall short in economics, jurisprudence, pedagogy, literature, history, and management. We anticipate that Xiezhi will help analyze important strengths and shortcomings of LLMs. The benchmark is released at~\url{https://github.com/MikeGu721/XiezhiBenchmark}.