Self-adaptive systems increasingly operate in close interaction with humans, often sharing the same physical or virtual environments and making decisions with ethical implications at runtime. Current approaches typically encode ethics as fixed, rule-based constraints or as a single ethical theory chosen and embedded at design time. This overlooks a fundamental property of human-system interaction settings: ethical preferences vary across individuals and groups, evolve with context, and may conflict, while still needing to remain within a hard-ethics envelope defined by law and regulation (e.g., safety and compliance constraints). This paper advocates a shift from static ethical rules to runtime ethical reasoning for self-adaptive systems, where ethical preferences are treated as runtime requirements that must be elicited, represented, and continuously revised as stakeholders and situations change. We argue that satisfying such requirements demands explicit ethics-based negotiation to manage ethical trade-offs among the multiple humans who interact with, are represented by, or are affected by a system. We identify key challenges: ethical uncertainty, conflicts among ethical values (spanning human, societal, and environmental drivers), and multi-dimensional, multi-party, multi-driver negotiation. We then outline research directions and open questions toward ethically self-adaptive systems.