Sequential hypothesis testing asks for decision rules that update as data arrive. A natural goal is \emph{eventual correctness}: the rule may change its mind early on, but it should make only finitely many wrong decisions almost surely. Starting from Cover's theorem, which guarantees such behavior for membership in a countable set of candidate means, we ask a sharper question: \emph{which sets actually admit computable sequential decision procedures with finitely many errors?} We answer this question by giving a complete characterization, both necessary and sufficient, of the subsets of $\Q$ that admit a computable finite-error sequential membership test. We further extend the characterization to any \emph{effectively presented} countable family of real means, exactly the setting in which Cover's identification rule can be implemented computably. Beyond this technical boundary, the results clarify, within a precise probabilistic setting, what it can mean for inquiry to ``converge to the truth,'' and they formalize a limit on what empirical methods can be expected to achieve when only eventual stabilization (rather than fixed-time guarantees) is demanded.

Keywords: Cover's theorem, sequential decision procedures, finite-error learning, limit computability, $\Delta^0_2$ sets.
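To fix ideas, the eventual-correctness requirement described above can be stated formally; the notation here (a test $d_n$, i.i.d. data $X_1, X_2, \dots$ with mean $\mu$, and a target set $S$) is our illustrative choice rather than the paper's own. A sequential membership test is a sequence of decision functions $d_n$ such that, for every candidate mean $\mu$,
\[
\Pr\bigl[\, d_n(X_1,\dots,X_n) \neq \mathbf{1}[\mu \in S] \ \text{for infinitely many } n \,\bigr] \;=\; 0,
\]
i.e., with probability one the test errs only finitely often, though no fixed time after which it is correct need be known. The computability question is then whether each $d_n$ can be realized by a single algorithm, uniformly in $n$.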