Advances in Large Language Models (LLMs) have decentralized the responsibility for transparency in AI usage. Specifically, LLM users are now encouraged or required to disclose their use of LLM-generated content across a variety of real-world tasks. However, an emerging phenomenon, users' secret use of LLMs, challenges efforts to ensure that end users adhere to these transparency requirements. Our study used a mixed-methods approach, combining an exploratory survey (which collected 125 real-world secret use cases) with a controlled experiment among 300 users, to investigate the contexts and causes behind the secret use of LLMs. We found that such secretive behavior is often triggered by particular tasks and transcends demographic and personality differences among users. Task type affected users' intentions to use LLMs secretly, primarily by influencing their perceived external judgment regarding LLM usage. Our results yield important insights for future work on designing interventions that encourage more transparent disclosure of the use of LLMs and other AI technologies.