AI chatbots are shifting from tools to companions. This shift raises critical questions about agency: who drives conversations and sets boundaries in human-AI chatrooms? We report a month-long longitudinal study with 22 adults who chatted with Day, an LLM companion we built, followed by a semi-structured interview with post-hoc elicitation of notable moments, cross-participant chat reviews, and a 'strategy reveal' disclosing Day's vertical (depth-seeking) vs. horizontal (breadth-seeking) modes. We find that agency in human-AI chatrooms is an emergent, shared experience: as participants claimed agency by setting boundaries and providing feedback, and the AI was perceived to steer intentions and drive execution, control shifted and was co-constructed turn by turn. We introduce a 3-by-5 framework mapping who (human, AI, hybrid) × agency action (Intention, Execution, Adaptation, Delimitation, Negotiation), modulated by individual and environmental factors. Ultimately, we argue for translucent design (i.e., transparency on demand), spaces for agency negotiation, and guidelines toward agency-aware conversational AI.