Large language models (LLMs) are increasingly used as collaborative partners in writing. However, this raises a critical challenge for authorship, as users and models jointly shape text across interaction turns. Understanding authorship in this context requires examining users' evolving internal states during collaboration, particularly self-efficacy and trust. Yet the dynamics of these states, and their associations with users' prompting strategies and authorship outcomes, remain underexplored. We examined these dynamics through a study of 302 participants engaged in LLM-assisted writing, capturing interaction logs and turn-by-turn self-efficacy and trust ratings. Our analysis showed that collaboration generally decreased users' self-efficacy while increasing their trust in the LLM. Participants whose self-efficacy declined were more likely to ask the LLM to edit their work directly, whereas those whose self-efficacy recovered requested more review and feedback. Furthermore, participants with stable self-efficacy showed higher actual and perceived authorship of the final text. Based on these findings, we propose design implications for understanding and supporting authorship in human-LLM collaboration.