AI companions based on large language models can role-play and converse very naturally. However, when value conflicts arise between an AI companion and its user, the companion may offend or upset the user. Yet little research has examined such conflicts. We first conducted a formative study analyzing 151 user complaints about conflicts with AI companions, deriving design implications for our work. Based on these, we created Minion, a technology probe to help users resolve human-AI value conflicts. Minion applies a user-empowerment intervention method that provides suggestions by combining expert-driven and user-driven conflict resolution strategies. In a technology probe study, we created 40 value conflict scenarios on Character.AI and Talkie; 22 participants completed 274 tasks and successfully resolved conflicts in 94.16% of them. We summarize users' responses, preferences, and needs in resolving value conflicts, and propose design implications to reduce conflicts and empower users to resolve them more effectively.