Large language models (LLMs) are increasingly used as conversational partners for learning, yet the interactional dynamics that support users' learning and engagement remain understudied. We analyze linguistic and interactional features of both LLM and participant messages across 397 human-LLM conversations about socio-political issues to identify the mechanisms and conditions under which LLM explanations shape changes in political knowledge and confidence. Mediation analyses reveal that LLM explanatory richness partially supports confidence by fostering users' reflective insight, whereas its effect on knowledge gain operates entirely through users' cognitive engagement. Moderation analyses show that these effects are highly conditional and vary by political efficacy: confidence gains depend on how high-efficacy users experience and resolve uncertainty, while knowledge gains depend on high-efficacy users' ability to leverage extended interaction, with longer conversations benefiting primarily reflective users. In summary, we find that learning from LLMs is an interactional achievement, not a uniform outcome of better explanations. These findings underscore the importance of aligning LLM explanatory behavior with users' engagement states when designing interactive human-AI systems that support effective learning.