Knowledge claims are abundant in the literature on large language models (LLMs), but can we say that GPT-4 truly "knows" the Earth is round? To address this question, we review standard definitions of knowledge in epistemology and formalize interpretations applicable to LLMs. In doing so, we identify inconsistencies and gaps in how current NLP research conceptualizes knowledge with respect to epistemological frameworks. Additionally, we survey 100 professional philosophers and computer scientists to compare their preferred definitions of knowledge and their views on whether LLMs can truly be said to know. Finally, we suggest evaluation protocols for testing knowledge in accordance with the most relevant definitions.