We address the choice of penalty parameter in the Smoothness-Penalized Deconvolution (SPeD) method of estimating a probability density under additive measurement error. Cross-validation gives an unbiased estimate of the risk, for the present sample size n, at any given penalty parameter, and the resulting risk estimate can then be minimized over the penalty parameter. However, least-squares cross-validation, which has been proposed for the similar Deconvoluting Kernel Density Estimator (DKDE), performs quite poorly for SPeD. We instead estimate the risk function for a smaller sample size n_1 < n at a given penalty parameter, use this to choose the penalty parameter for sample size n_1, and then use the asymptotics of the optimal penalty parameter to choose for sample size n. In a simulation study, we find that this method has dramatically better performance than cross-validation, improves on a SURE-type method previously proposed for this estimator, and compares favorably to the classic DKDE with its recommended plug-in method. We prove that the maximum error in estimating the risk function is of smaller order than its optimal rate of convergence.
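To make the extrapolation step concrete, the following is a minimal Python sketch of the procedure described above, under stated assumptions: it presumes the optimal penalty decays like lambda_n ~ C * n^(-alpha) for a known rate alpha, and the risk-estimation routine estimate_risk is a hypothetical placeholder for the paper's actual risk estimator at sample size n_1, not its implementation.

```python
import numpy as np

def choose_penalty(data, n1, alpha, lambda_grid, estimate_risk):
    """Sketch of penalty selection by subsample risk estimation.

    Assumptions (not the paper's exact formulas):
      - estimate_risk(data, n1, lam): hypothetical routine returning an
        estimate of the SPeD risk at sample size n1 with penalty lam.
      - alpha: assumed decay rate of the optimal penalty, lambda_n ~ n^(-alpha).
    """
    n = len(data)
    # Step 1: estimate the risk at the smaller sample size n1
    # over a grid of candidate penalty parameters.
    risks = np.array([estimate_risk(data, n1, lam) for lam in lambda_grid])
    # Step 2: the minimizer is the chosen penalty for sample size n1.
    lam_n1 = lambda_grid[int(np.argmin(risks))]
    # Step 3: extrapolate to sample size n via the assumed asymptotics:
    # lambda_n / lambda_n1 = (n / n1)^(-alpha) = (n1 / n)^alpha.
    return lam_n1 * (n1 / n) ** alpha
```

In this sketch, the grid search over lambda_grid stands in for whatever numerical minimization of the estimated risk function is actually used; the key point it illustrates is that the penalty is selected at the smaller size n_1 and then rescaled to n using the assumed asymptotic rate.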