We consider a broad class of permutation invariant statistical problems, extending the standard decision theoretic formulation to also accommodate selective inference tasks, where the target is specified only after seeing the data. For any such problem we show that, among all permutation invariant procedures, the minimizer of the risk at $\boldsymbol{\theta}$ is precisely the rule minimizing the Bayes risk under a (postulated) discrete prior that assigns equal probability to every permutation of $\boldsymbol{\theta}$. This gives an explicit characterization of the greatest lower bound on the risk of any sensible procedure in a wide range of problems. Furthermore, in a permutation invariant problem of estimating the parameter of a selected population under squared loss, we prove that this lower bound coincides asymptotically with a simpler lower bound, attained by the Bayes solution that replaces the aforementioned uniform prior on all permutations of $\boldsymbol{\theta}$ with the i.i.d. prior having the same marginals. This has important algorithmic implications, because it suggests that the greatest lower bound is asymptotically attainable, uniformly in $\boldsymbol{\theta}$, by an empirical Bayes procedure. Altogether, the above extends theory that has so far been established in the literature only for the very special case of compound decision problems.
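To fix ideas, the following display is a minimal sketch of the characterization described above, under the assumption of a standard setup with risk function $R(\boldsymbol{\theta},\delta)$ for a procedure $\delta$ and symmetric group $S_n$ acting on the $n$ coordinates of $\boldsymbol{\theta}$; the notation here is ours and is not fixed by the abstract.
\[
\inf_{\delta\ \text{perm.\ inv.}} R(\boldsymbol{\theta}, \delta)
\;=\;
\min_{\delta}\; \frac{1}{n!}\sum_{\sigma \in S_n} R\!\left(\boldsymbol{\theta}_{\sigma}, \delta\right),
\qquad
\boldsymbol{\theta}_{\sigma}=(\theta_{\sigma(1)},\dots,\theta_{\sigma(n)}),
\]
where the right-hand side is the Bayes risk, minimized over all procedures, under the prior $\pi_{\boldsymbol{\theta}}$ that puts mass $1/n!$ on each permuted vector $\boldsymbol{\theta}_{\sigma}$; the minimum is therefore attained by the Bayes rule with respect to $\pi_{\boldsymbol{\theta}}$.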
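The simpler lower bound mentioned for the estimation-after-selection problem can likewise be sketched, under the assumption that $G_{\boldsymbol{\theta}}$ denotes the empirical distribution of the coordinates of $\boldsymbol{\theta}$; the symbols $G_{\boldsymbol{\theta}}$ and $\widehat{G}$ below are illustrative and not taken from the abstract.
\[
G_{\boldsymbol{\theta}}(t) \;=\; \frac{1}{n}\sum_{j=1}^{n} \mathbb{1}\{\theta_j \le t\},
\qquad
\theta_1,\dots,\theta_n \;\overset{\text{i.i.d.}}{\sim}\; G_{\boldsymbol{\theta}} \ \ \text{(postulated i.i.d. prior with the same marginals)}.
\]
The Bayes solution under this i.i.d. prior attains the simpler bound, and replacing $G_{\boldsymbol{\theta}}$ by an estimate $\widehat{G}$ computed from the data yields an empirical Bayes procedure of the kind suggested by the asymptotic attainability claim.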