Bayes' rule has enabled innumerable powerful algorithms in statistical signal processing and statistical machine learning. However, when the prior and/or data distributions are misspecified, the direct application of Bayes' rule is questionable. Philosophically, the key is to balance the relative importance of the prior information and the data evidence when calculating posterior distributions: if the prior distribution is overly conservative (i.e., exceedingly spread out), we upweight the prior belief; if the prior distribution is overly aggressive (i.e., exceedingly concentrated), we downweight the prior belief. The same operation also applies to likelihood distributions, which are defined as normalized likelihoods whenever the normalization exists. This paper studies a generalized Bayes' rule, called the uncertainty-aware (UA) Bayes' rule, that technically realizes the above philosophy and thereby combats model uncertainties in the prior and/or data distributions. In particular, the advantage of the proposed UA Bayes' rule over the existing power posterior (i.e., the $\alpha$-posterior) is investigated. Applications of the UA Bayes' rule to classification and estimation are discussed: specifically, the UA naive Bayes classifier, the UA Kalman filter, the UA particle filter, and the UA interacting multiple model filter are proposed and experimentally validated.
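To make the upweight/downweight philosophy concrete, the following is a minimal sketch of the tempered (power) posterior that the abstract contrasts against, not the paper's UA Bayes' rule itself. It assumes a conjugate Gaussian model with known observation noise, where raising the prior $N(m_0, s_0^2)$ to a power $\beta$ and the likelihood to a power $\alpha$ keeps the posterior Gaussian in closed form; the function name and parameters are hypothetical.

```python
import numpy as np

def tempered_gaussian_posterior(x, m0, s0, sigma, alpha=1.0, beta=1.0):
    """Posterior over a Gaussian mean theta with tempering weights.

    Illustrative sketch (not the paper's UA rule): the prior
    N(m0, s0^2) is raised to the power beta and the likelihood
    prod_i N(x_i; theta, sigma^2) to the power alpha before Bayes'
    rule is applied. alpha = beta = 1 recovers the standard
    posterior; beta < 1 downweights an overly aggressive
    (concentrated) prior, beta > 1 upweights a conservative one.
    """
    n = len(x)
    prior_prec = beta / s0**2        # tempered prior precision
    lik_prec = alpha * n / sigma**2  # tempered likelihood precision
    post_var = 1.0 / (prior_prec + lik_prec)
    post_mean = post_var * (prior_prec * m0 + lik_prec * np.mean(x))
    return post_mean, post_var

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.0, size=50)  # data centered at 2
# An overly concentrated ("aggressive") prior wrongly centered at 0:
m_std, _ = tempered_gaussian_posterior(x, m0=0.0, s0=0.1, sigma=1.0)
# Downweighting that prior (beta < 1) pulls the posterior mean
# toward the data evidence:
m_down, _ = tempered_gaussian_posterior(x, m0=0.0, s0=0.1, sigma=1.0,
                                        beta=0.1)
```

With `beta = 1` the misspecified, concentrated prior dominates and the posterior mean stays near 0; with `beta = 0.1` the data evidence dominates and the posterior mean moves toward the sample mean, mirroring the "downweight an overly aggressive prior" operation described above.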