In explanations, explainers form mental representations of explainees' developing knowledge and shifting interests regarding the explanandum. These mental representations are dynamic and evolve over time, enabling explainers to react to explainees' needs by adapting and customizing the explanation. XAI should be able to react to explainees' needs in a similar manner; therefore, a component that incorporates aspects of explainers' mental representations of explainees is required. In this study, we took first steps by investigating explainers' mental representations in everyday explanations of technological artifacts. According to the dual nature theory, technological artifacts require explanations from two distinct perspectives: observable and measurable features addressing "Architecture", and interpretable aspects addressing "Relevance". We conducted extended semi-structured pre-, post-, and video-recall interviews with explainers (N=9) in the context of an explanation. The transcribed interviews were analyzed using qualitative content analysis. The explainers' answers regarding the explainees' knowledge of and interests in the technological artifact showed that explainers' initially vague assumptions developed into strong beliefs over the course of the explanations. Explainers assumed that explainees' knowledge at the beginning centered on Architecture and developed toward knowledge of both Architecture and Relevance. In contrast, explainers assumed a higher interest in Relevance at the beginning, shifting toward interest in both Architecture and Relevance as the explanation progressed. Further, explainers often finished the explanation despite perceiving that explainees still had gaps in their knowledge. These findings are translated into practical implications for user models in adaptive explainable systems.