We propose a unified framework to enhance the power of online multiple hypothesis testing procedures based on $e$-values. While $e$-value-based methods offer robust online False Discovery Rate (FDR) control under minimal assumptions, they often lose power by discarding evidence in excess of the rejection threshold. We address this inefficiency via the \textbf{S}equential \textbf{C}ontrol with \textbf{O}vershoot \textbf{R}efund for \textbf{E}-values (SCORE) framework, which leverages the inequality $\mathbb{I}(y \ge 1) \le y - (y-1)_+$ to reclaim this otherwise ``wasted'' evidence. This simple yet powerful insight yields a unified principle for improving a broad class of online testing algorithms. Building on this framework, we develop SCORE-enhanced versions of several state-of-the-art procedures, including SCORE-LOND, SCORE-LORD, and SCORE-SAFFRON, all of which strictly dominate their original counterparts while preserving valid finite-sample FDR control. Furthermore, under mild assumptions, SCORE permits retroactive updates of alpha-wealth by using the latest decision twice: first to determine its reward or loss, and then to refresh past wealth. This mechanism enables more aggressive testing strategies while maintaining valid FDR control, thereby further improving statistical power. The effectiveness of the proposed methods is validated through extensive simulations and real-data experiments.
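As a brief sketch of the idea behind the overshoot refund, note that for any $e$-value $y \ge 0$,
\[
\mathbb{I}(y \ge 1) \;=\;
\begin{cases}
1 \;=\; y - (y-1), & y \ge 1,\\[2pt]
0 \;\le\; y, & 0 \le y < 1,
\end{cases}
\qquad\text{hence}\qquad
\mathbb{I}(y \ge 1) \;\le\; y - (y-1)_+ .
\]
The gap $(y-1)_+$ is the overshoot that a standard $e$-value procedure would forfeit upon rejection; the inequality indicates how this amount can, in principle, be credited back while the expectation bound used for FDR control is retained.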