A/B testing has become the cornerstone of decision-making in online markets, guiding how platforms launch new features, optimize pricing strategies, and improve user experience. In practice, the pairwise $t$-test is typically employed to compare outcomes between the treatment and control groups and thereby assess the effectiveness of a given strategy. To be trustworthy, these experiments must keep the Type I error (i.e., the false positive rate) under control; otherwise, harmful strategies may be launched. In real-world applications, however, we find that A/B testing often fails to deliver reliable results: when the data distribution departs from normality or when the treatment and control groups differ in sample size, the commonly used pairwise $t$-test is no longer trustworthy. In this paper, we quantify how skewed, long-tailed data and unequal allocation distort error rates, and we derive explicit formulas for the minimum sample size required for the $t$-test to remain valid. We find that many online feedback metrics require hundreds of millions of samples to ensure reliable A/B testing. We therefore introduce an Edgeworth-based correction that provides more accurate $p$-values when the available sample size is limited. Offline experiments on a leading A/B testing platform corroborate the practical value of our theoretical minimum sample size thresholds and demonstrate that the corrected method substantially improves the reliability of A/B testing in real-world conditions.
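For intuition only, the following is a minimal sketch of the kind of expansion underlying such a correction; the notation ($T$, $n$, $\gamma$, $\Phi$, $\phi$) is illustrative and not taken from this paper, and the displayed formula is the classical one-term Edgeworth expansion for a one-sample Studentized mean, not necessarily the exact expression derived here. For $n$ i.i.d. observations with population skewness $\gamma$, the Studentized mean $T$ satisfies
\[
P(T \le x) \;=\; \Phi(x) \;+\; \frac{\gamma}{6\sqrt{n}}\,\bigl(2x^{2}+1\bigr)\,\phi(x) \;+\; O\!\left(n^{-1}\right),
\]
where $\Phi$ and $\phi$ denote the standard normal distribution function and density. The $O(n^{-1/2})$ skewness term is exactly what the plain $t$-test ignores, and since it decays only like $\gamma/\sqrt{n}$, heavily skewed, long-tailed metrics can require very large samples before normal-based $p$-values become accurate, which is the regime an Edgeworth-based correction targets.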