Humans and machines interact more frequently than ever, and our societies are becoming increasingly hybrid. One consequence of this hybridisation is the degradation of societal trust due to the prevalence of AI-enabled deception. Yet, despite advances in our understanding of trust in AI in recent years, we still lack a computational theory with which to fully understand and explain the role deception plays in this context. This is a problem because, while our ability to explain deception in hybrid societies lags behind, the design of AI agents may keep advancing towards fully autonomous deceptive machines, which would pose new challenges for dealing with deception. In this paper we build a timely and meaningful interdisciplinary perspective on deceptive AI and reinforce a 20-year-old socio-cognitive perspective on trust and deception by proposing the development of DAMAS -- a holistic Multi-Agent Systems (MAS) framework for the socio-cognitive modelling and analysis of deception. In a nutshell, this paper covers the topic of modelling and explaining deception using AI approaches from the perspectives of Computer Science, Philosophy, Psychology, Ethics, and Intelligence Analysis.