With the growing popularity of conversational agents based on large language models (LLMs), we need to ensure their behaviour is ethical and appropriate. Work in this area largely centres around the 'HHH' criteria: making outputs more helpful and honest, and avoiding harmful (biased, toxic, or inaccurate) statements. Whilst this semantic focus is useful when viewing LLM agents as mere mediums or output-generating systems, it fails to account for pragmatic factors that can make the same speech act seem more or less tactless or inconsiderate in different social situations. With the push towards agentic AI, wherein systems become increasingly proactive in pursuing goals and performing actions in the world, considering the pragmatics of interaction becomes essential. We propose an interactional approach to ethics that is centred on relational and situational factors. We explore what it means for a system, as a social actor, to treat an individual respectfully in a (series of) interaction(s). Our work anticipates a set of largely unexplored risks at the level of situated social interaction, and offers practical suggestions to help agentic LLM technologies treat people well.