Scientific theories of consciousness should be falsifiable and non-trivial. Recent research has provided formal tools for analyzing these two requirements. Surprisingly, many contemporary theories of consciousness fail to meet this bar, including theories based on causal structure and also (as I demonstrate) theories based on function. Herein, I show that the requirements of falsifiability and non-triviality especially constrain the potential consciousness of contemporary Large Language Models (LLMs), because of their proximity to systems that are equivalent to LLMs in input/output function; for these functionally equivalent systems, no falsifiable and non-trivial theory of consciousness can judge them conscious. This forms the basis of a disproof of contemporary LLM consciousness. I then show a positive result: theories of consciousness based on (or requiring) continual learning do satisfy the stringent formal constraints for a theory of consciousness in humans. Intriguingly, this work supports a hypothesis: if continual learning is linked to consciousness in humans, then the current limitations of LLMs, which do not continually learn, are intimately tied to their lack of consciousness.