Large language models (LLMs) are increasingly used as collaborative partners in writing. However, this raises a critical challenge for authorship, as users and models jointly shape the text across interaction turns. Understanding authorship in this context requires examining users' evolving internal states during collaboration, particularly self-efficacy and trust. Yet the dynamics of these states, and their associations with users' prompting strategies and authorship outcomes, remain underexplored. We examined these dynamics in a study of 302 participants engaged in LLM-assisted writing, capturing interaction logs and turn-by-turn self-efficacy and trust ratings. Our analysis showed that collaboration generally decreased users' self-efficacy while increasing their trust in the model. Participants whose self-efficacy declined were more likely to ask the LLM to edit their work directly, whereas those whose self-efficacy recovered requested more review and feedback. Furthermore, participants with stable self-efficacy showed higher actual and perceived authorship of the final text. Based on these findings, we propose design implications for understanding and supporting authorship in human-LLM collaboration.