Large Language Models (LLMs) have shown exceptional performance across various Data-to-Text Generation (DTG) tasks. However, generating factually consistent text in DTG remains challenging for LLMs. Despite this, in-depth evaluations of LLM factual consistency for DTG are missing from the current literature. This paper addresses that gap by providing an extensive evaluation of factual consistency in LLMs for DTG. Our evaluation covers five widely used DTG datasets (E2E, ViGGo, WikiTableText, DART, and WebNLG) and five prominent LLM families (T5, BART, OPT, BLOOM, and Llama 2). To ensure a thorough evaluation of factual consistency, we use four state-of-the-art automatic metrics and include essential human assessments. Our extensive evaluation reveals three key findings regarding factual consistency in LLMs for DTG. First, Llama 2 often excels at generating factually consistent text, although smaller models like T5 and BART can achieve strong factual consistency on larger, lexically less-diverse datasets. Second, the average rate of change (AROC) indicates that increasing model size (the number of trainable parameters) generally enhances the factual consistency of LLMs in DTG. Third, we observe that source-reference divergence (i.e., when the reference text diverges semantically from the source) typically reduces the factual consistency of LLMs in DTG.
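The abstract's second finding relies on the average rate of change (AROC) of factual consistency with respect to model size. As a rough illustration only, the sketch below computes a simple AROC over hypothetical (parameter count, consistency score) pairs; the paper's exact AROC definition, the `aroc` function name, and the sample numbers are all assumptions introduced here for clarity.

```python
def aroc(points):
    """Average rate of change over (num_params, score) pairs.

    Computes the mean of the pairwise slopes (delta score / delta params)
    between consecutive model sizes. This is a generic AROC formulation,
    not necessarily the one used in the paper.
    """
    pts = sorted(points)  # order by parameter count
    slopes = [
        (s2 - s1) / (p2 - p1)
        for (p1, s1), (p2, s2) in zip(pts, pts[1:])
    ]
    return sum(slopes) / len(slopes)

# Hypothetical consistency scores for three model sizes (params in billions).
scores = [(0.8, 0.71), (3.0, 0.78), (13.0, 0.84)]
print(aroc(scores))  # a positive AROC means consistency rises with size
```

A positive AROC under this formulation supports the abstract's claim that scaling up trainable parameters tends to improve factual consistency.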