Many functionals of interest in statistics and machine learning can be written as minimizers of expected loss functions. Such functionals are called $M$-estimands and can be estimated by $M$-estimators -- minimizers of empirical average losses. Traditionally, statistical inference (e.g., hypothesis tests and confidence sets) for $M$-estimands is obtained by proving asymptotic normality of $M$-estimators centered at the target. However, asymptotic normality is only one of several possible limiting distributions, and (asymptotically) valid inference becomes significantly more difficult with non-normal limits. In this paper, we provide conditions under which three general classes of limiting distributions are symmetric, enabling inference via the HulC (Kuchibhotla et al., 2024).
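As a point of reference (with generic notation $\ell$, $Z_1,\dots,Z_n$, and $\Theta$ assumed here for illustration rather than taken from the paper), the $M$-estimand and its $M$-estimator can be written as
$$
\theta^{*} = \operatorname*{arg\,min}_{\theta \in \Theta} \mathbb{E}\!\left[\ell(\theta; Z)\right],
\qquad
\hat{\theta}_{n} = \operatorname*{arg\,min}_{\theta \in \Theta} \frac{1}{n}\sum_{i=1}^{n} \ell(\theta; Z_{i}),
$$
so that inference concerns the distribution of $\hat{\theta}_{n}$ centered at $\theta^{*}$.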