When AI systems enable human-like communication, they elicit increasingly complex relational responses. Knowledge workers face a particular challenge: they approach these systems as tools while interacting with them in ways that resemble human social interaction. To understand the relational contexts that arise when humans engage with anthropomorphic conversational agents, we need to expand existing human-computer interaction frameworks. Through three workshops with qualitative researchers, we found that the fundamental ontological and relational ambiguities inherent in anthropomorphic conversational agents make it difficult for individuals to maintain consistent relational stances toward them. Our findings indicate that people's articulated positioning toward such agents often differs from the relational dynamics that unfold during interaction. We propose the concept of relational dissonance to help researchers, designers, and policymakers recognize the resulting tensions in the development, deployment, and governance of anthropomorphic conversational agents, and to address the need for relational transparency.