With the recent surge in the development of large language models, the need for comprehensive, language-specific evaluation benchmarks has become critical. While significant progress has been made in evaluating English-language models, benchmarks for other languages, particularly those with unique linguistic characteristics such as Turkish, remain underdeveloped. Our study introduces TurkBench, a comprehensive benchmark designed to assess the capabilities of generative large language models in Turkish. TurkBench comprises 8,151 data samples across 21 distinct subtasks, organized into six main evaluation categories: Knowledge, Language Understanding, Reasoning, Content Moderation, Turkish Grammar and Vocabulary, and Instruction Following. The diverse range of tasks and the culturally relevant data provide researchers and developers with a valuable tool for evaluating their models and identifying areas for improvement. We further publish our benchmark for online submissions at https://huggingface.co/turkbench