The increasing reliance on human preference feedback to judge AI-generated pseudo-labels has created a pressing need for principled, budget-conscious data acquisition strategies. We address the question of how to optimally allocate a fixed annotation budget between ground-truth labels and pairwise preference annotations. Grounded in semiparametric inference, our solution casts the budget allocation problem in a monotone missing-data framework. Building on this formulation, we introduce Preference-Calibrated Active Learning (PCAL), a novel method that learns the optimal data acquisition strategy and yields a statistically efficient estimator for functionals of the data distribution. Theoretically, we prove the asymptotic optimality of the PCAL estimator and establish a robustness guarantee: performance degrades gracefully even when the nuisance models are poorly estimated. Because the framework directly optimizes the estimator's variance rather than requiring a closed-form solution, it applies to a general class of problems. This work thus provides a principled and statistically efficient approach to budget-constrained learning in modern AI. Simulations and real-data analysis demonstrate the practical benefits and superior performance of the proposed method.