Monitoring the performance of classification models in production is critical yet challenging due to strict labeling budgets, one-shot batch acquisition of labels, and extremely low error rates. We propose a general framework based on Stratified Importance Sampling (SIS) that directly addresses these constraints in model monitoring. While SIS has previously been applied in specialized domains, our theoretical analysis establishes its broad applicability to the monitoring of classification models. Under mild conditions, SIS yields unbiased estimators with strict finite-sample mean squared error (MSE) improvements over both importance sampling (IS) and stratified random sampling (SRS). The framework does not rely on optimally defined proposal distributions or strata: even with noisy proxies and sub-optimal stratification, SIS can improve estimator efficiency over IS or SRS individually, though extreme proposal mismatch may limit these gains. Experiments across binary and multiclass tasks demonstrate consistent efficiency improvements under fixed label budgets, underscoring SIS as a principled, label-efficient, and operationally lightweight methodology for post-deployment model monitoring.
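To make the estimator concrete, the following is a minimal sketch of stratified importance sampling for error-rate estimation, not the paper's implementation. All names (`sis_error_rate`, the proxy `scores`, the stratum labels) are illustrative assumptions: within each stratum, items are drawn with probability proportional to a positive proxy score, and IS weights correct back to the uniform within-stratum target, so the combined estimate is unbiased for the population error rate.

```python
import numpy as np

def sis_error_rate(errors, strata, scores, n_per_stratum, rng):
    """Stratified importance sampling estimate of the population error rate.

    errors:  0/1 array of per-item losses (in practice queried lazily from labelers)
    strata:  integer stratum id per item (any stratification, need not be optimal)
    scores:  positive proxy values (e.g. model uncertainty) defining the proposal
    """
    N = len(errors)
    total = 0.0
    for h in np.unique(strata):
        idx = np.where(strata == h)[0]
        # Within-stratum proposal q proportional to the proxy score.
        q = scores[idx] / scores[idx].sum()
        draw = rng.choice(len(idx), size=n_per_stratum, replace=True, p=q)
        # IS weight: target is uniform over the stratum (mass 1/|stratum|).
        w = (1.0 / len(idx)) / q[draw]
        mu_h = np.mean(errors[idx][draw] * w)  # unbiased stratum-mean estimate
        total += (len(idx) / N) * mu_h          # combine with stratum weights
    return total
```

Averaging the estimator over independent label draws recovers the true error rate, reflecting the unbiasedness claimed above even when the proxy scores and strata are far from optimal.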