Anthropomorphisation -- the phenomenon whereby non-human entities are ascribed human-like qualities -- has become increasingly salient with the rise of large language model (LLM)-based conversational agents (CAs). Unlike earlier chatbots, LLM-based CAs routinely generate interactional and linguistic cues, such as first-person self-reference and epistemic and affective expressions, that empirical work shows can increase engagement. At the same time, anthropomorphisation raises ethical concerns, including deception, overreliance, and exploitative relationship framing, while some authors argue that anthropomorphic interaction may support autonomy, well-being, and inclusion. Despite growing interest in the phenomenon, the literature remains fragmented across domains and varies substantially in how it defines, operationalizes, and normatively evaluates anthropomorphisation. This scoping review maps ethically oriented work on anthropomorphising LLM-based CAs across five databases and three preprint repositories. We synthesize (1) conceptual foundations, (2) ethical challenges and opportunities, and (3) methodological approaches. We find convergence on attribution-based definitions but substantial divergence in operationalization, a predominantly risk-forward normative framing, and limited empirical work linking observed interaction effects to actionable governance guidance. We conclude with a research agenda and design and governance recommendations for ethically deploying anthropomorphic cues in LLM-based conversational agents.