Agreement Technologies refer to open computer systems in which autonomous software agents interact with one another, typically on behalf of humans, in order to reach mutually acceptable agreements. With the advance of AI systems in recent years, it has become apparent that such agreements, in order to be acceptable to the involved parties, must remain aligned with ethical principles and moral values. However, this is notoriously difficult to ensure, especially as different human users (and their software agents) may hold different value systems, i.e. they may weigh the importance of individual moral values differently. Furthermore, it is often hard to computationally specify the precise meaning of a value in a particular context. Methods to estimate value systems from human-engineered specifications, e.g. value surveys, are limited in scale due to the need for intensive human moderation. In this article, we propose a novel method to automatically \emph{learn} value systems from observations and human demonstrations. In particular, we propose a formal model of the \emph{value system learning} problem, its instantiation to sequential decision-making domains based on multi-objective Markov decision processes, as well as tailored preference-based and inverse reinforcement learning algorithms to infer value grounding functions and value systems. The approach is illustrated and evaluated on two simulated use cases.