In belief revision, agents typically modify their beliefs when they receive new information that conflicts with them. The guiding principle behind most belief revision frameworks is minimalism, which advocates minimal changes to existing beliefs. However, minimalism may not capture the nuanced ways in which human agents reevaluate and modify their beliefs. In contrast, the explanatory hypothesis posits that people are inherently driven to seek explanations for inconsistencies, thereby striving for explanatory coherence rather than minimal change when revising beliefs. Our contribution in this paper is two-fold. First, motivated by the explanatory hypothesis, we present a novel yet simple belief revision operator that, given a belief base and an explanation for an explanandum, revises the belief base in a manner that preserves the explanandum and is not necessarily minimal. We call this operator explanation-based belief revision. Second, we conduct two human-subject studies to empirically validate our approach and investigate belief revision behavior in real-world scenarios. Our findings support the explanatory hypothesis and provide insights into the strategies people employ when resolving inconsistencies.