We address the classical problem of constructing confidence intervals (CIs) for the mean of a distribution, given \(N\) i.i.d. samples, such that the CI contains the true mean with probability at least \(1 - \delta\), where \(\delta \in (0,1)\). We characterize three distinct learning regimes according to the minimum achievable limiting width of any CI as the sample size \(N_{\delta} \to \infty\) and \(\delta \to 0\). In the first regime, where \(N_{\delta}\) grows more slowly than \(\log(1/\delta)\), the limiting width of any CI equals the width of the distribution's support, precluding meaningful inference. In the second regime, where \(N_{\delta}\) scales as \(\log(1/\delta)\), we precisely characterize the minimum limiting width, which depends on the scaling constant. In the third regime, where \(N_{\delta}\) grows faster than \(\log(1/\delta)\), complete learning is achievable: the limiting width of the CI collapses to zero, so the CI converges to the true mean. We demonstrate that CIs derived from concentration inequalities based on Kullback--Leibler (KL) divergences are asymptotically optimal, attaining the minimum limiting width in both the second (sufficient learning) and third (complete learning) regimes for two families of distributions: single-parameter exponential families and distributions with bounded support. These results also extend to one-sided CIs, with the notion of width adjusted appropriately. Finally, we generalize our findings to settings with random per-sample costs, motivated by practical applications such as stochastic simulators and cloud service selection. Instead of a fixed sample size, we consider a cost budget \(C_{\delta}\), identify analogous learning regimes, and characterize an optimal CI construction policy.
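As a concrete illustration of the kind of KL-divergence-based interval referred to above, consider the standard Chernoff/KL upper confidence bound for a Bernoulli mean; this is a minimal sketch in our own notation (\(\hat{\mu}_N\), \(\mathrm{kl}\), \(U_\delta\)), not necessarily the exact construction analyzed in the paper:
\[
\mathrm{kl}(p, q) \;=\; p \log\frac{p}{q} + (1-p)\log\frac{1-p}{1-q},
\qquad
U_\delta \;=\; \sup\Bigl\{\, q \in [\hat{\mu}_N, 1] \;:\; N\,\mathrm{kl}(\hat{\mu}_N, q) \le \log\tfrac{1}{\delta} \,\Bigr\}.
\]
By the Chernoff--Hoeffding bound \(\Pr(\hat{\mu}_N \le x) \le e^{-N\,\mathrm{kl}(x, \mu)}\) for \(x \le \mu\), this upper bound satisfies \(\Pr(\mu > U_\delta) \le \delta\); a lower confidence bound \(L_\delta\) is obtained symmetrically, giving a two-sided CI \([L_\delta, U_\delta]\) at level \(1 - 2\delta\).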