Recent advances in LLMs have sparked a debate on whether they understand text. In this position paper, we argue that opponents in this debate hold different definitions of understanding, and in particular differ in their views on the role of consciousness. To substantiate this claim, we propose a thought experiment involving an open-source chatbot $Z$ that excels on every possible benchmark, seemingly without subjective experience. We ask whether $Z$ is capable of understanding, and show that different schools of thought within seminal AI research appear to answer this question differently, uncovering their terminological disagreement. Moving forward, we propose two distinct working definitions of understanding that explicitly acknowledge the question of consciousness, and draw connections with a rich literature in philosophy, psychology, and neuroscience.