When providers update AI companions, users report grief, betrayal, and loss. A growing literature asks whether the norms governing personal relationships extend to these interactions. What, if anything, is morally significant about them? I argue that human-AI companion interaction has a triadic structure in which the provider exercises constitutive control over the AI. I identify three structural conditions of normatively robust dyads, conditions that the norms characteristic of personal relationships presuppose, and show that AI companion interactions fail all three. This failure reveals what I call Unilateral Relationship Revision Power (URRP): the provider can rewrite how the AI interacts from a position where those revisions are not answerable within the interaction itself. I argue that URRP is pro tanto wrong in interactions designed to cultivate the norms of personal relationships, because the design produces expectations that the structure cannot sustain. URRP has three implications: i) normative hollowing, under which commitment is elicited but no agent inside the interaction bears it; ii) displaced vulnerability, under which the user's exposure is governed by an agent not answerable to her within the interaction; and iii) structural irreconcilability, under which reconciliation is structurally unavailable because the agent who acted and the entity the user interacts with are not the same. I discuss design principles such as commitment calibration, structural separation, and continuity assurance as external substitutes for the internal constraints that the triadic structure removes. The analysis therefore suggests that a central and underexplored problem in relational AI ethics is the structural arrangement of power over the human-AI interaction itself.