In constrained stochastic optimization, one naturally expects that imposing a stricter feasible set does not increase the statistical risk of an estimator defined by projection onto that set. In this paper, we show that this intuition can fail even in canonical settings. We study the Gaussian sequence model, a deliberately austere test bed in which, for a compact, convex set $Θ\subset \mathbb{R}^d$, one observes \[ Y = θ^\star + σZ, \qquad Z \sim N(0, I_d), \] and seeks to estimate the unknown parameter $θ^\star \in Θ$. The natural estimator is the least squares estimator (LSE), which coincides with the Euclidean projection of $Y$ onto $Θ$. We construct an explicit example exhibiting \emph{risk reversal}: for sufficiently large noise, there exist nested compact convex sets $Θ_S \subset Θ_L$ and a parameter $θ^\star \in Θ_S$ such that the LSE constrained to $Θ_S$ has strictly larger risk than the LSE constrained to $Θ_L$. We further show that this phenomenon can persist at the level of worst-case risk: the supremum of the risk over the smaller constraint set can exceed that over the larger one. We clarify this behavior by contrasting noise regimes. In the vanishing-noise limit, the risk admits a first-order expansion governed by the statistical dimension of the tangent cone at $θ^\star$, and tighter constraints uniformly reduce the risk. In the diverging-noise regime, by contrast, the risk is determined by global geometric interactions between the constraint set and the random noise directions, and the way $Θ_S$ is embedded within $Θ_L$ can reverse the risk ordering. These results reveal a previously unrecognized failure mode of projection-based estimators: in sufficiently noisy settings, tightening a constraint can paradoxically degrade statistical performance.
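The setup above can be made concrete with a short Monte Carlo sketch: simulate $Y = θ^\star + σZ$, project onto a constraint set, and average the squared error. The nested sets below (a segment inside a square in $\mathbb{R}^2$) are illustrative placeholders chosen for easy closed-form projections; they are not the paper's reversal construction and are not claimed to exhibit risk reversal.

```python
import numpy as np

rng = np.random.default_rng(0)

def proj_segment(y, a, b):
    """Euclidean projection of y onto the segment from a to b."""
    d = b - a
    t = np.clip(np.dot(y - a, d) / np.dot(d, d), 0.0, 1.0)
    return a + t * d

def proj_square(y, r):
    """Euclidean projection of y onto the square [-r, r]^2."""
    return np.clip(y, -r, r)

def mc_risk(project, theta, sigma, n=20000):
    """Monte Carlo estimate of E ||project(theta + sigma*Z) - theta||^2."""
    Z = rng.standard_normal((n, 2))
    est = np.array([project(theta + sigma * z) for z in Z])
    return np.mean(np.sum((est - theta) ** 2, axis=1))

theta = np.array([0.0, 0.0])                      # theta* lies in both sets
a, b = np.array([-1.0, 0.0]), np.array([1.0, 0.0])  # Theta_S: a segment
r = 1.0                                            # Theta_L: the square [-1,1]^2

for sigma in (0.1, 10.0):  # small-noise vs. large-noise regimes
    risk_S = mc_risk(lambda y: proj_segment(y, a, b), theta, sigma)
    risk_L = mc_risk(lambda y: proj_square(y, r), theta, sigma)
    print(f"sigma={sigma}: segment risk={risk_S:.3f}, square risk={risk_L:.3f}")
```

Swapping in other compact convex sets only requires supplying their projection maps; detecting a reversal then amounts to finding $σ$, $θ^\star$, and nested sets for which the smaller set's estimated risk exceeds the larger set's.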