Online user-generated content platforms allocate billions of dollars of promotional traffic through algorithms in two-sided marketplaces. To evaluate updates to these algorithms, platforms frequently rely on creator-side randomized experiments. However, because treated and control creators compete for exposure, such experiments suffer from algorithmic interference: exposure outcomes depend on competitors' treatment status. We show that commonly used difference-in-means estimators can therefore be severely biased and may even recommend deploying inferior algorithms. To address this challenge, we develop a structured semiparametric framework that explicitly models the competitive allocation mechanism underlying exposure. Our approach combines an algorithm choice model that characterizes how exposure is allocated across competing content with a viewer response model that captures engagement conditional on exposure. We construct a debiased estimator grounded in the double machine learning (DML) framework to recover the global treatment effect of a platform-wide rollout. Methodologically, we extend DML asymptotic theory to accommodate correlated samples arising from overlapping consideration sets. Using Monte Carlo simulations and a large-scale field experiment on a major short-video platform, we show that our estimator closely matches an interference-free benchmark obtained from a costly double-sided experimental design. In contrast, standard estimators exhibit substantial bias and, in some cases, even reverse the sign of the effect.
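The interference bias described above can be illustrated with a minimal toy simulation (not the paper's actual model; the linear quality lift, proportional exposure allocation, and fixed exposure budget are all simplifying assumptions). Because the total exposure budget is fixed, a treatment that lifts every creator's quality leaves the platform-wide rollout effect at zero, yet a creator-side difference-in-means comparison inside a mixed experiment reports a large positive effect, since treated creators capture exposure from their control competitors:

```python
import numpy as np

def exposure_shares(quality):
    # Proportional allocation: a fixed exposure budget is split in
    # proportion to content quality (stand-in for the ranking algorithm).
    return quality / quality.sum()

def engagement(treat, n, tau=0.5):
    # Hypothetical toy model: treatment lifts quality by tau, and
    # engagement is proportional to the exposure a creator receives.
    quality = 1.0 + tau * treat
    return exposure_shares(quality) * n  # total exposure budget = n

n = 1000
rng = np.random.default_rng(0)
treat = np.zeros(n)
treat[rng.permutation(n)[: n // 2]] = 1.0  # creator-side 50/50 randomization

# Global treatment effect: all-treated rollout vs. all-control baseline.
# Shares are uniform in both counterfactuals, so the true effect is zero.
gte = engagement(np.ones(n), n).mean() - engagement(np.zeros(n), n).mean()

# Naive difference-in-means within the single mixed experiment:
# treated creators crowd out controls, producing a spurious positive effect.
y = engagement(treat, n)
dim = y[treat == 1].mean() - y[treat == 0].mean()

print(f"global treatment effect: {gte:.3f}")   # 0.000
print(f"naive difference-in-means: {dim:.3f}")  # 0.400
```

With tau = 0.5, treated creators receive share 1.5/1250 and controls 1.0/1250 of the budget, so the naive estimator reports +0.4 engagement per creator while the true rollout effect is exactly zero, the sign-reversal-scale distortion the abstract warns about.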