Recent efforts have evaluated large language models (LLMs) in areas such as commonsense reasoning, mathematical reasoning, and code generation. However, to the best of our knowledge, no work has specifically investigated the performance of LLMs on natural language generation (NLG) tasks, a pivotal criterion for determining model excellence. Thus, this paper conducts a comprehensive evaluation of well-known and high-performing LLMs, namely ChatGPT, ChatGLM, T5-based models, LLaMA-based models, and Pythia-based models, on NLG tasks. We select English and Chinese datasets covering dialogue generation and text summarization. Moreover, we propose a common evaluation setting that incorporates input templates and post-processing strategies. Our study reports automatic evaluation results, accompanied by a detailed analysis.