Recent studies have discussed how users increasingly rely on conversational AI systems, powered by LLMs, for information seeking, decision support, and even emotional support. However, these macro-level observations offer limited insight into how the purpose of these interactions shifts over time, how users frame their interactions with the system, and how steering dynamics unfold in these human-AI interactions. To examine these evolving dynamics, we gathered and analyzed a unique dataset, InVivoGPT, consisting of 825K ChatGPT interactions donated by 300 users through their GDPR data rights. Our analyses reveal three key findings. First, participants increasingly turn to ChatGPT for a broader range of purposes, including substantial growth in sensitive domains such as health and mental health. Second, interactions become more socially framed: the system anthropomorphizes itself at rising rates, participants more frequently treat it as a companion, and personal data disclosure becomes both more common and more diverse. Third, conversational steering becomes more prominent, especially after the release of GPT-4o, with the number of conversations in which participants followed a model-initiated suggestion quadrupling over the period covered by our dataset. Overall, our results show that conversational AI systems are shifting from functional tools to social partners, raising important questions about their design and governance.