AI chatbots designed as emotional companions blur the boundary between interpersonal intimacy and institutional software, creating a complex, multi-dimensional privacy environment. Drawing on Communication Privacy Management theory and Masur's horizontal (user-AI) and vertical (user-platform) privacy framework, we conducted in-depth interviews with fifteen users of companion AI platforms such as Replika and Character.AI. Our findings reveal that users blend interpersonal habits with institutional awareness: while the non-judgmental, always-available nature of chatbots fosters emotional safety and encourages self-disclosure, users remain mindful of institutional risks and actively manage privacy through layered strategies and selective sharing. Nevertheless, many feel uncertain or powerless regarding platform-level data control. Anthropomorphic design further blurs privacy boundaries, sometimes leading to unintentional oversharing and privacy turbulence. These results extend privacy theory by highlighting the distinctive interplay of emotional and institutional privacy management in human-AI companionship.