Classical estimation outputs a single point estimate of an unknown $d$-dimensional vector from an observation. In this paper, we study \emph{$k$-list estimation}, in which a single observation is used to produce a list of $k$ candidate estimates and performance is measured by the expected squared distance from the true vector to the closest candidate. We compare this centralized setting with a symmetric decentralized MMSE benchmark in which $k$ agents observe conditionally i.i.d.\ measurements and each agent outputs its own MMSE estimate. On the centralized side, we show that optimal $k$-list estimation is equivalent to fixed-rate $k$-point vector quantization of the posterior distribution and, under standard regularity conditions, admits an exact high-rate asymptotic expansion with explicit constants and decay rate $k^{-2/d}$. On the decentralized side, we derive lower bounds in terms of the small-ball behavior of the single-agent MMSE error; in particular, when the conditional error density is bounded near the origin, the benchmark distortion cannot decay faster than order $k^{-2/d}$. We further show that if the error density vanishes at the origin, then the decentralized benchmark is provably unable to match the centralized $k^{-2/d}$ exponent, whereas the centralized estimator retains that scaling. Gaussian specializations yield explicit formulas, and numerical experiments corroborate the predicted asymptotic behavior. Overall, the results show that, in the scaling with $k$, one observation combined with $k$ carefully chosen candidates can be asymptotically as effective as -- and in some regimes strictly better than -- this MMSE-based decentralized benchmark with $k$ independent observations.
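The two benchmarks above can be contrasted in a small Monte Carlo sketch. This is an illustrative toy, not the paper's experiment: we assume a Gaussian model $X \sim \mathcal{N}(0, I_d)$ with observation $Y = X + \mathcal{N}(0, \sigma^2 I_d)$, so the posterior is $\mathcal{N}\!\big(Y/(1+\sigma^2),\, \tfrac{\sigma^2}{1+\sigma^2} I_d\big)$. The centralized $k$-list is approximated by a plain Lloyd (k-means) quantizer fitted to posterior samples, matching the quantization equivalence stated in the abstract; the decentralized benchmark draws $k$ conditionally i.i.d. observations and takes the best of the $k$ per-agent MMSE estimates. All parameter values (`d`, `sigma2`, `n_trials`, `n_post`) are arbitrary choices for the sketch.

```python
import numpy as np

# Toy Gaussian model (assumed for illustration): X ~ N(0, I_d), Y = X + N(0, sigma2 I_d).
rng = np.random.default_rng(0)
d, sigma2, n_trials, n_post = 2, 1.0, 500, 300

def lloyd(samples, k, iters=15):
    """Plain Lloyd/k-means: builds a k-point codebook for the posterior samples."""
    centers = samples[rng.choice(len(samples), size=k, replace=False)].copy()
    for _ in range(iters):
        # assign each sample to its nearest center, then recompute centroids
        idx = ((samples[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            pts = samples[idx == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return centers

def trial(k):
    x = rng.standard_normal(d)  # unknown d-dimensional vector
    # Centralized: ONE observation; the k-list is a k-point quantizer of the posterior.
    y = x + np.sqrt(sigma2) * rng.standard_normal(d)
    mu, tau2 = y / (1 + sigma2), sigma2 / (1 + sigma2)
    post = mu + np.sqrt(tau2) * rng.standard_normal((n_post, d))
    cand = lloyd(post, k)
    cen = ((x - cand) ** 2).sum(-1).min()  # squared distance to closest candidate
    # Decentralized benchmark: k agents with conditionally i.i.d. observations,
    # each reporting its own MMSE estimate (posterior-mean shrinkage).
    ys = x + np.sqrt(sigma2) * rng.standard_normal((k, d))
    dec = ((x - ys / (1 + sigma2)) ** 2).sum(-1).min()
    return cen, dec

results = {}
for k in (2, 4, 8, 16):
    res = np.array([trial(k) for _ in range(n_trials)])
    results[k] = res.mean(axis=0)  # [centralized, decentralized] mean distortions
    print(f"k={k:2d}  centralized={results[k][0]:.4f}  decentralized={results[k][1]:.4f}")
```

With $d = 2$ the predicted centralized decay is $k^{-2/d} = k^{-1}$, so the centralized column should drop roughly in proportion to $1/k$; both columns decrease in $k$, consistent with the scaling comparison in the abstract.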