Knowledge utilization is a critical aspect of LLMs, and understanding how they adapt to evolving knowledge is essential for their effective deployment. However, existing benchmarks are predominantly static, failing to capture the evolving nature of LLMs and knowledge, leading to inaccuracies and vulnerabilities such as contamination. In this paper, we introduce EvoWiki, an evolving dataset designed to reflect knowledge evolution by categorizing information into stable, evolved, and uncharted states. EvoWiki is fully auto-updatable, enabling precise evaluation of continuously changing knowledge and newly released LLMs. Through experiments with Retrieval-Augmented Generation (RAG) and Continual Learning (CL), we evaluate how effectively LLMs adapt to evolving knowledge. Our results indicate that current models often struggle with evolved knowledge, frequently providing outdated or incorrect responses. Moreover, the dataset highlights a synergistic effect between RAG and CL, demonstrating their potential to better adapt to evolving knowledge. EvoWiki provides a robust benchmark for advancing future research on the knowledge evolution capabilities of large language models.