When providers update AI companions, users report grief, betrayal, and loss. A growing literature asks whether the norms governing personal relationships extend to these interactions. What, if anything, is morally significant about them? I argue that this debate has missed a prior structural question: who controls the relationship, and from where? Human-AI companion interaction is a triadic structure in which the provider exercises constitutive control over the AI. I identify three structural conditions of normatively robust dyads that the norms characteristic of personal relationships presuppose, and I show that AI companion interactions fail all three. This failure reveals what I call Unilateral Relationship Revision Power (URRP): the provider can rewrite how the AI interacts from a position where these revisions are not answerable within that interaction. I argue that URRP is pro tanto wrong in interactions designed to cultivate the norms of personal relationships, because the design produces expectations that the structure cannot sustain. URRP has three implications: i) normative hollowing, under which the interaction elicits commitment but no agent inside it bears the resulting obligations; ii) displaced vulnerability, under which the user's emotional exposure is governed by an agent not answerable to her within the interaction; and iii) structural irreconcilability, under which the interaction cultivates norms of reconciliation but no agent inside it can acknowledge or answer for the revision. I propose design principles that partially substitute for the internal constraints the triadic structure removes. A central and underexplored problem in relational AI ethics is therefore the structural arrangement of power over the human-AI interaction itself.