There has recently been widespread discussion of whether large language models might be sentient. Should we take this idea seriously? I will break down the strongest reasons for and against. Given mainstream assumptions in the science of consciousness, there are significant obstacles to consciousness in current models: for example, their lack of recurrent processing, a global workspace, and unified agency. At the same time, it is quite possible that these obstacles will be overcome in the next decade or so. I conclude that while it is somewhat unlikely that current large language models are conscious, we should take seriously the possibility that successors to large language models may be conscious in the not-too-distant future.