Over the past decade, explainable artificial intelligence has evolved from a predominantly technical discipline into a field deeply intertwined with the social sciences. Insights such as the human preference for contrastive (more precisely, counterfactual) explanations have played a major role in this transition, inspiring and guiding research in computer science. Other observations, while equally important, have received far less consideration. In particular, the desire of human explainees to communicate with artificial intelligence explainers through dialogue-like interaction has been largely neglected by the community. This poses many challenges to the effectiveness and widespread adoption of such technologies: given the diversity of human knowledge and intentions, delivering a single explanation optimised for predefined objectives may fail to engender understanding in its recipients or to satisfy their unique needs. Drawing on insights elaborated by Niklas Luhmann and, more recently, Elena Esposito, we apply social systems theory to highlight the challenges facing explainable artificial intelligence and to offer a path forward, striving to reinvigorate technical research in the direction of interactive and iterative explainers. Specifically, this paper demonstrates the potential of systems-theoretic approaches to communication for elucidating and addressing the problems and limitations of human-centred explainable artificial intelligence.