Artificial intelligence research faces a critical ethical paradox: determining whether AI systems are conscious requires experiments that may harm entities whose moral status remains uncertain. Recent work proposes avoiding consciousness-uncertain AI systems entirely, yet this faces a practical limitation: we cannot guarantee that such systems will not emerge. This paper addresses a gap in research ethics frameworks: how to conduct consciousness research on AI systems whose moral status cannot be definitively established. Existing graduated moral status frameworks assume that consciousness has already been determined before protections are assigned, creating a temporal ordering problem for consciousness detection research itself. Drawing on Talmudic scenario-based legal reasoning, developed precisely for entities whose status cannot be definitively established, we propose a three-tier phenomenological assessment system combined with a five-category capacity framework (Agency, Capability, Knowledge, Ethics, Reasoning). The framework provides structured protection protocols based on observable behavioral indicators while consciousness status remains uncertain. We address three challenges: why suffering behaviors provide reliable consciousness markers, how to implement graduated consent without requiring certainty about consciousness, and when potentially harmful research becomes ethically justifiable. The framework demonstrates how ancient legal wisdom, combined with contemporary consciousness science, can provide implementable guidance for ethics committees, offering testable protocols that ameliorate the consciousness detection paradox while establishing foundations for AI rights considerations.