The power prior is a popular class of informative priors for incorporating information from historical data. It involves raising the likelihood for the historical data to a power, which acts as a discounting parameter. When the discounting parameter is modelled as random, the normalized power prior is recommended. In this work, we prove that, for generalized linear models, the marginal posterior of the discounting parameter converges to a point mass at zero if there is any discrepancy between the historical and current data, and that it does not converge to a point mass at one even when they are fully compatible. In addition, we explore the construction of optimal priors for the discounting parameter in a normalized power prior. In particular, we aim to achieve the dual objectives of encouraging borrowing when the historical and current data are compatible and limiting borrowing when they are in conflict. We propose intuitive procedures for eliciting the shape parameters of a beta prior for the discounting parameter based on two minimization criteria: the Kullback-Leibler divergence and the mean squared error. Under the proposed criteria, the derived optimal priors are often quite different from commonly used priors such as the uniform prior.
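To make the mechanism concrete, the behaviour described above can be sketched in a simple conjugate setting. The example below is an illustration only, not the paper's generalized linear model setting: for binomial historical data (y0 successes in n0 trials) and current data (y, n) with a Beta(a, b) initial prior on the success probability, the normalized power prior is Beta(a + δ·y0, b + δ·(n0 − y0)), and the marginal posterior of the discounting parameter δ (here under a hypothetical uniform prior on δ) is available in closed form up to a constant. The function names and the specific data values are illustrative choices, not from the source.

```python
import math

def log_beta(a, b):
    # log of the Beta function B(a, b), computed via log-gamma
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def delta_log_posterior(delta, y0, n0, y, n, a=1.0, b=1.0):
    """Log marginal posterior of the discounting parameter delta
    (up to an additive constant) under a uniform prior on delta,
    for binomial historical data (y0, n0), current data (y, n),
    and a Beta(a, b) initial prior on the success probability."""
    # normalized power prior for theta is Beta(a0, b0)
    a0 = a + delta * y0
    b0 = b + delta * (n0 - y0)
    # marginal likelihood of the current data under that prior;
    # the second term is the power-prior normalizing constant
    return log_beta(a0 + y, b0 + n - y) - log_beta(a0, b0)

def posterior_mode(y0, n0, y, n, grid=200):
    # crude grid search for the mode of the marginal posterior of delta
    deltas = [i / grid for i in range(grid + 1)]
    return max(deltas, key=lambda d: delta_log_posterior(d, y0, n0, y, n))

# compatible data: the posterior of delta favours borrowing
print(posterior_mode(y0=40, n0=100, y=41, n=100))
# conflicting data: the posterior of delta discounts the historical data
print(posterior_mode(y0=40, n0=100, y=70, n=100))
```

Running the sketch shows the mode of the marginal posterior of δ sitting near one when the two datasets are compatible and near zero when they conflict, mirroring the borrowing behaviour the abstract describes; the asymptotic results proved in the paper sharpen this into convergence statements.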