With the recent surge in the development of large language models, the need for comprehensive, language-specific evaluation benchmarks has become critical. While significant progress has been made in evaluating English language models, benchmarks for other languages, particularly those with unique linguistic characteristics such as Turkish, remain underdeveloped. Our study introduces TurkBench, a comprehensive benchmark designed to assess the capabilities of generative large language models in Turkish. TurkBench comprises 8,151 data samples across 21 distinct subtasks, organized under six main evaluation categories: Knowledge, Language Understanding, Reasoning, Content Moderation, Turkish Grammar and Vocabulary, and Instruction Following. The diverse range of tasks and the culturally relevant data provide researchers and developers with a valuable tool for evaluating their models and identifying areas for improvement. We further publish our benchmark for online submissions at https://huggingface.co/turkbench