The advancement of large language models (LLMs) has made rigorous and systematic evaluation of the complex tasks they perform increasingly challenging, especially in enterprise applications. Consequently, LLMs need to be benchmarked on enterprise datasets across a variety of tasks. This work presents a systematic exploration of benchmarking strategies tailored to LLM evaluation, focusing on domain-specific datasets that span a range of NLP tasks. The proposed evaluation framework encompasses 25 publicly available datasets from diverse enterprise domains, including financial services, legal, cyber security, and climate and sustainability. The varied performance of 13 models across these enterprise tasks highlights the importance of selecting the right model based on the specific requirements of each task. Code and prompts are available on GitHub.