A central problem in the theory of empirical Bayes is to control the regret (excess risk) of a learned Bayes rule by the Hellinger distance between the estimated and true marginal densities. In the normal means model, the classical result of Jiang and Zhang (2009, Annals of Statistics) achieves this only after regularizing the Bayes rule and incurs an extraneous cubic logarithmic factor through a delicate recursive argument. This paper introduces a new technique, based on polynomial approximation and Bernstein-type inequalities for weighted $L_2$ norms, that bounds the unregularized regret directly. The method is conceptually simpler and yields sharper, sometimes optimal, regret bounds. For compactly supported priors, we prove a sharp bound: the regret is at most $O(\varepsilon^2 \log(1/\varepsilon)/\log\log(1/\varepsilon))$, where $\varepsilon$ is the Hellinger distance between the estimated and true marginal densities. The same method extends to priors with exponential tails. Conversely, we show that regularization is genuinely necessary for heavy-tailed priors under bounded moment assumptions alone. As a statistical consequence, we obtain improved regret bounds for the nonparametric maximum likelihood estimator.
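To fix notation for the bound above, here is our reconstruction of the standard normal means setup (not quoted from the paper): for a prior $\pi$ on $\mathbb{R}$, let $\theta \sim \pi$ and $X \mid \theta \sim N(\theta, 1)$, so the marginal density is $f_\pi = \varphi * \pi$ with $\varphi$ the standard normal density, and by Tweedie's formula the Bayes rule is $\hat\theta_\pi(x) = x + (\log f_\pi)'(x)$. With expectations taken under the true model, the regret of the unregularized plug-in rule built from an estimate $\hat\pi$ is
\[
\mathrm{Regret}(\hat\pi; \pi) \;=\; \mathbb{E}\bigl[\hat\theta_{\hat\pi}(X) - \theta\bigr]^2 \;-\; \mathbb{E}\bigl[\hat\theta_{\pi}(X) - \theta\bigr]^2,
\]
and on this reading the main result for compactly supported priors states that $\mathrm{Regret}(\hat\pi; \pi) \le C\, \varepsilon^2 \log(1/\varepsilon)/\log\log(1/\varepsilon)$ with $\varepsilon = H(f_{\hat\pi}, f_\pi)$, the Hellinger distance between the two marginal densities.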