Artificially intelligent systems have become remarkably sophisticated. They hold conversations, write essays, and seem to understand context in ways that surprise even their creators. This raises a crucial question: Are we creating systems that are conscious? The Digital Consciousness Model (DCM) is a first attempt to assess the evidence for consciousness in AI systems in a systematic, probabilistic way. It provides a shared framework for comparing different AIs and biological organisms, and for tracking how the evidence changes over time as AI develops. Instead of adopting a single theory of consciousness, it incorporates a range of leading theories and perspectives, acknowledging that experts disagree fundamentally about what consciousness is and what conditions are necessary for it. This report describes the structure and initial results of the Digital Consciousness Model. Overall, we find that the evidence weighs against 2024 LLMs being conscious, though not decisively; it is also much weaker than the evidence against consciousness in simpler AI systems.