The integration of Large Language Models (LLMs) into social robotics presents a unique set of ethical challenges and social impacts. This research sets out to identify the ethical considerations that arise in the design and development of these two technologies in combination. Using LLMs for social robotics may provide benefits, such as enabling natural-language open-domain dialogues. However, the intersection of these two technologies also gives rise to ethical concerns related to misinformation, non-verbal cues, emotional disruption, and biases. The robot's physical social embodiment adds complexity, as ethical hazards associated with LLM-based social AI, such as hallucinations and misinformation, can be exacerbated by the effects of physical embodiment on social perception and communication. To address these challenges, this study employs an empirical, design justice-based methodology, focusing on identifying socio-technical ethical considerations through a qualitative co-design and interaction study. The purpose of the study is to identify ethical considerations relevant to the co-design of, and interaction with, a humanoid social robot serving as the interface to an LLM, and to evaluate how a design justice methodology can be used in the context of designing LLM-based social robotics. The findings reveal a mapping of ethical considerations across four conceptual dimensions: interaction, co-design, terms of service, and relationship. They also show how a design justice approach can be applied empirically at the intersection of LLMs and social robotics.