The Robust Mixture Prior (RMP) is a popular Bayesian dynamic borrowing method, which combines an informative historical distribution with a less informative component (referred to as the robustification component) in a mixture prior to enhance the efficiency of hybrid-control randomized trials. Current practice typically focuses solely on the selection of the prior weight that governs the relative influence of these two components, often fixing the variance of the robustification component to that of a single observation. In this study, we demonstrate that the performance of the RMP critically depends on the joint selection of both the weight and the variance of the robustification component. In particular, we show that a wide range of weight-variance pairs can yield practically identical posterior inferences in certain regions of the parameter space, and that large-variance robustification components may be employed without incurring the so-called Lindley's paradox. We further show that the use of large-variance robustification components leads to improved asymptotic Type I error control and enhanced robustness of the RMP to the specification of the location parameter of the robustification component. Finally, we leverage these theoretical results to propose a novel and practical hyperparameter elicitation routine.
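The weight-variance interplay described above can be illustrated with a minimal sketch for a normal mean with known sampling variance, where the RMP is a two-component normal mixture and the posterior is available in closed form. This is a generic conjugate-mixture computation, not the paper's own implementation; all function and parameter names are illustrative.

```python
import math

def rmp_posterior(ybar, s2, w, mu_h, tau2_h, mu_r, tau2_r):
    """Posterior of a normal mean under a two-component robust mixture prior.

    ybar, s2      : observed mean and its known sampling variance
    w             : prior weight on the informative historical component
    mu_h, tau2_h  : mean and variance of the historical component
    mu_r, tau2_r  : mean and variance of the robustification component

    Returns a list of (posterior weight, posterior mean, posterior
    variance) tuples, one per mixture component.
    """
    comps = [(w, mu_h, tau2_h), (1.0 - w, mu_r, tau2_r)]
    post = []
    for wk, mu, tau2 in comps:
        # Prior-predictive (marginal) density of ybar under this component;
        # it drives how the data reallocates weight between components.
        var_m = tau2 + s2
        dens = math.exp(-0.5 * (ybar - mu) ** 2 / var_m) / math.sqrt(2 * math.pi * var_m)
        # Standard conjugate normal-normal update for this component.
        v_post = 1.0 / (1.0 / tau2 + 1.0 / s2)
        m_post = v_post * (mu / tau2 + ybar / s2)
        post.append([wk * dens, m_post, v_post])
    # Normalize the posterior mixture weights.
    z = sum(p[0] for p in post)
    for p in post:
        p[0] /= z
    return [tuple(p) for p in post]
```

In this sketch, enlarging tau2_r flattens the robustification component's prior-predictive density, so when the data agree with the historical component almost all posterior weight flows to it, while a prior-data conflict (ybar far from mu_h) shifts weight to the robust component.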