The evolution of artificial intelligence (AI) has rendered the boundary between humanity and computational machinery increasingly ambiguous. As human-machine symbiosis grows more interwoven, the very notion of AI-generated information becomes difficult to define, since such information arises not from humans or machines in isolation but from their mutual shaping. The pertinent question is therefore not merely whether AI has participated, but how it has participated. The role assumed by AI is typically specified, implicitly or explicitly, in the input prompt, yet becomes obscure or altogether unobservable when only the generated content is available; once detached from the dialogue context, the functional role may no longer be traceable. This study addresses the problem of tracing the functional role played by AI in natural language generation. A methodology is proposed to infer the latent role specified by the prompt, embed this role into the content during the probabilistic generation process, and subsequently recover the nature of AI participation from the resulting text. Experiments are conducted under a representative scenario in which AI acts either as an assistive agent that edits human-written content or as a creative agent that generates new content from a brief concept. The experimental results support the validity of the proposed methodology in terms of discrimination between roles, robustness against perturbations, and preservation of linguistic quality. We envision that this study may contribute to future research on the ethics of AI, particularly with regard to whether AI has been used fairly, transparently, and appropriately.
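The embed-then-recover pipeline sketched in the abstract can be illustrated, in spirit, by a role-keyed statistical watermark: each candidate role deterministically selects a "green" subset of the vocabulary, generation is biased toward that subset, and recovery scores each role by its green-token fraction in the observed text. The sketch below is a toy illustration only; the role labels (`assistive`, `creative`), the synthetic vocabulary, and the uniform sampler are all assumptions for demonstration, not the paper's actual method or model.

```python
import hashlib
import random

# Hypothetical role labels and a synthetic vocabulary (illustrative assumptions).
ROLES = ["assistive", "creative"]
VOCAB = [f"tok{i}" for i in range(1000)]

def green_list(role, fraction=0.5):
    """Deterministically partition the vocabulary into a role-keyed 'green' set."""
    seed = int(hashlib.sha256(role.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = VOCAB[:]
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * fraction)])

def generate(role, length=200, bias=0.9, seed=0):
    """Toy 'probabilistic generation': sample tokens, biased toward the role's green set."""
    greens = sorted(green_list(role))
    rng = random.Random(seed)
    return [rng.choice(greens) if rng.random() < bias else rng.choice(VOCAB)
            for _ in range(length)]

def recover_role(tokens):
    """Recover the embedded role as the candidate whose green set best explains the text."""
    def green_fraction(role):
        greens = green_list(role)
        return sum(t in greens for t in tokens) / len(tokens)
    return max(ROLES, key=green_fraction)

if __name__ == "__main__":
    text = generate("creative")
    print(recover_role(text))  # the embedded role is recoverable from the tokens alone
```

Because the green sets of distinct roles overlap only by chance (about half the vocabulary here), the biased text scores far higher under its true role than under any other, which is the statistical basis for discriminating between roles from the generated content alone.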