Online media platforms often need to measure how frequently users are exposed to specific content attributes in order to evaluate trade-offs in A/B experiments. A direct approach is to sample content, label it with a high-quality rubric (e.g., an expert-reviewed LLM prompt), and estimate impression-weighted prevalence. However, repeatedly running such labeling for every experiment arm and segment is too costly and slow to serve as a default measurement at scale. We present a scalable \emph{surrogate-based prevalence measurement} framework that decouples expensive labeling from per-experiment evaluation. The framework calibrates a surrogate signal against reference labels offline and then uses only impression logs to estimate prevalence for arbitrary experiment arms and segments. We instantiate this framework using \emph{score bucketing} as the surrogate: we discretize a model score into buckets, estimate bucket-level prevalences from an offline labeled sample, and combine these calibrated bucket-level prevalences with each arm's bucket distribution of impressions to obtain fast, log-based estimates. Across multiple large-scale A/B tests, we validate that the surrogate estimates closely match the reference estimates for both arm-level prevalence and treatment--control deltas. This enables scalable, low-latency prevalence measurement in experimentation without requiring per-experiment labeling jobs.
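The score-bucketing instantiation can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: the function names, the bucket edges, and the binary-label setup are all assumptions for the sketch. Offline, per-bucket prevalences are estimated from a labeled sample; online, an arm's prevalence is estimated by averaging those calibrated values over the arm's logged impression scores.

```python
import bisect

def bucketize(score, edges):
    # Map a raw model score to a bucket index using fixed cut points.
    return bisect.bisect_right(edges, score)

def fit_bucket_prevalence(scores, labels, edges):
    # Offline step: estimate p_hat[b], the fraction of labeled items
    # in bucket b that carry the attribute (labels are 0/1 here).
    counts, positives = {}, {}
    for s, y in zip(scores, labels):
        b = bucketize(s, edges)
        counts[b] = counts.get(b, 0) + 1
        positives[b] = positives.get(b, 0) + y
    return {b: positives[b] / counts[b] for b in counts}

def estimate_arm_prevalence(impression_scores, edges, p_hat):
    # Online step: average the calibrated bucket prevalences over the
    # arm's impressions -- equivalently, a dot product of p_hat with
    # the arm's bucket distribution. No per-experiment labeling needed.
    total = sum(p_hat[bucketize(s, edges)] for s in impression_scores)
    return total / len(impression_scores)
```

In this sketch, a treatment-control delta is simply the difference of `estimate_arm_prevalence` between the two arms' impression logs; the expensive labeled sample is consumed only once, in `fit_bucket_prevalence`.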