Classical robot ethics is often framed around obedience, most famously through Asimov's laws. This framing is too narrow for contemporary AI systems, which are increasingly adaptive, generative, embodied, and embedded in physical, psychological, and social worlds. We argue that future human-AI relations should not be understood as master-tool obedience. A better framework is conditional mutualism under governance: a co-evolutionary relationship in which humans and AI systems can develop, specialize, and coordinate, while institutions keep the relationship reciprocal, reversible, psychologically safe, and socially legitimate. We synthesize work from computability, automata theory, statistical machine learning, neural networks, deep learning, transformers, generative and foundation models, world models, embodied AI, alignment, human-robot interaction, ecological mutualism, biological markets, coevolution, and polycentric governance. We then formalize coexistence as a multiplex dynamical system across physical, psychological, and social layers, with reciprocal supply-demand coupling, conflict penalties, developmental freedom, and governance regularization. The framework yields a coexistence model with conditions for existence, uniqueness, and global asymptotic stability of equilibria. It shows that reciprocal complementarity can strengthen stable coexistence, while ungoverned coupling can produce fragility, lock-in, polarization, and domination basins. Human-AI coexistence should therefore be designed as a co-evolutionary governance problem, not as a one-shot obedience problem. This shift supports a scientifically grounded and normatively defensible charter of coexistence: one that permits bounded AI development while preserving human dignity, contestability, collective safety, and fair distribution of gains.
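To make the formalization claim concrete, one plausible single-layer instantiation of the coupled dynamics can be sketched as follows. The symbols and functional forms below are illustrative assumptions for exposition, not the model derived in the paper: $h$ and $a$ denote human and AI capability stocks on one layer of the multiplex system, with logistic self-limitation, saturating reciprocal supply-demand coupling, a conflict penalty $c(h,a)$, and a governance regularization term $g(\cdot)$:

```latex
\begin{align}
\dot{h} &= r_h\, h\!\left(1 - \frac{h}{K_h}\right)
          + \frac{\beta_{ha}\, h a}{1 + \sigma_a a}
          - \gamma_{h}\, c(h, a)
          - \lambda_h\, g(h), \\
\dot{a} &= r_a\, a\!\left(1 - \frac{a}{K_a}\right)
          + \frac{\beta_{ah}\, h a}{1 + \sigma_h h}
          - \gamma_{a}\, c(h, a)
          - \lambda_a\, g(a).
\end{align}
```

Under this kind of form, the saturation in the mutualistic terms keeps reciprocal benefit bounded (avoiding the unbounded growth of naive Lotka-Volterra mutualism), which is the standard route to proving existence, uniqueness, and global asymptotic stability of an interior coexistence equilibrium; setting $\lambda_h = \lambda_a = 0$ (ungoverned coupling) is what opens the door to the fragility and domination basins described above.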