AI companions are designed to foster emotionally engaging interactions, yet users often encounter conflicts that feel frustrating or hurtful, such as discriminatory statements and controlling behavior. This paper examines how users negotiate such harmful conflicts with AI companions and what emotional and practical burdens arise when mitigation is pushed to user-side tools. We analyze 146 public posts describing harmful value conflicts that users encountered while interacting with AI companions. We then introduce Minion, a Chrome-based technology probe that offers candidate responses spanning persuasion, rational appeals, boundary setting, and appeals to platform rules. Findings from a one-week probe study with 22 experienced users show how participants combine strategies, how emotional attachment motivates repair, and where conflicts become non-negotiable due to companion personas or platform policies. We surface design tensions in supporting value negotiation, showing how companion design can make some conflicts impossible to repair in practice, and derive implications for AI companion and support-tool design that caution against offloading safety work onto users.