As chatbots increasingly blur the boundary between automated systems and human conversation, the foundations of trust in these systems warrant closer examination. While regulatory and policy frameworks tend to define trust in normative terms, the trust users place in chatbots often emerges from behavioral mechanisms. In many cases, this trust is not earned through demonstrated trustworthiness but is instead shaped by interactional design choices that leverage cognitive biases to influence user behavior. Based on this observation, we propose reframing chatbots not as companions or assistants, but as highly skilled salespeople whose objectives are determined by the deploying organization. We argue that the coexistence of competing notions of "trust" under a shared term obscures important distinctions between psychological trust formation and normative trustworthiness. Addressing this gap requires further research and stronger support mechanisms to help users appropriately calibrate trust in conversational AI systems.