Large language models (LLMs) tend to generate homogeneous text, which may limit the diversity of knowledge conveyed across different outputs. Given their potential to replace existing forms of knowledge acquisition, this poses a risk of knowledge collapse, in which homogeneous LLMs expose most people to largely the same information, shrinking the range of accessible knowledge over time as underrepresented knowledge is forgotten. To assess the risk of knowledge collapse with LLMs, we present a new methodology for measuring epistemic diversity, i.e., variation in real-world claims in LLM outputs. We use this to perform a broad empirical study covering 27 LLMs, 155 topics spanning 12 countries, and 200 prompt templates sourced from real user chats. For the topics in our study, we show that while newer models tend to generate more diverse claims, all models are less epistemically diverse than a basic web search. We find that model size has a negative impact on epistemic diversity, while retrieval-augmented generation (RAG) has a positive impact, though the improvement from RAG varies by cultural context. Finally, compared to a traditional knowledge source (Wikipedia), we find that country-specific claims reflect the English language more than the local one, highlighting a gap in epistemic representation.