Firms increasingly use randomized experiments to decide whether to scale up an intervention and, if so, how to re-optimize related operational choices such as inventory, capacity, or pricing. In many settings, experiments are run on small samples, so the estimated effect of the intervention is uncertain. A common practice is to plug a 'significant' estimate of the effect into both (i) the rollout rule and (ii) the downstream optimization. However, this can lead to avoidable losses because the costs of over- versus under-estimating the effect are often asymmetric. The technically ideal approach is a data-dependent decision rule that minimizes the Bayes risk, but such rules lack transparency and are computationally demanding. We propose Predict-Adjust-Then-Rollout-Optimize (PATRO), a plug-in approach that keeps the standard estimate but applies a separate data-independent adjustment to each of the two decisions. We show that the two adjustments can act as substitutes or complements and provide an alternating-iteration method to compute the pair. Both in theory and numerically, PATRO performs close to, or on par with, the Bayes-optimal benchmark, making it a simple, effective way to convert noisy experimental results into better rollout and operational decisions.