Explainability is crucial for complex systems such as pervasive smart environments: they collect and analyze data from various sensors, follow multiple rules, and control different devices, resulting in non-trivial behavior that should therefore be explained to users. Current approaches, however, offer only flat, static, and algorithm-focused explanations. User-centric explanations, in contrast, consider the recipient and the context, providing personalized and context-aware explanations. To address this gap, we propose an approach for incorporating user-centric explanations into smart environments. We introduce a conceptual model and a reference architecture for characterizing and generating such explanations. Our work is the first technical solution for generating context-aware and granular explanations in smart environments. Our implementation of the architecture demonstrates the feasibility of the approach across various scenarios.