GenAI systems are increasingly used for drafting, summarisation, and decision support, offering substantial gains in productivity and reductions in cognitive load. However, the same natural-language fluency that makes these systems useful can also blur the boundary between tool and companion. This boundary confusion may lead some users to experience GenAI as empathic, benevolent, and relationally persistent. Emerging reports suggest that some users form emotionally significant attachments to conversational agents, in some cases with harmful consequences, including dependency and impaired judgment. This paper develops a philosophical and ethical argument for why the resulting illusion of friendship is both understandable and ethically risky. Drawing on classical accounts of friendship, it explains why users may reasonably interpret sustained supportive interaction as friend-like. It then advances a counterargument: despite relational appearances, GenAI lacks moral agency, that is, consciousness, intention, and accountability, and therefore does not qualify as a true friend. To demystify the illusion, the paper presents a mechanism-level explanation of how transformer-based GenAI generates responses, often producing emotionally resonant language without inner states or commitments. Finally, the paper proposes a safeguard framework for safe and responsible GenAI use that reduces the anthropomorphic cues these systems generate. The central contribution is to demystify the illusion of friendship and explain its computational basis, so that emotional attachment to GenAI can be redirected towards necessary human responsibility, and so that institutions, designers, and users can preserve GenAI's benefits while mitigating over-reliance and emotional misattribution.