Real-time probability forecasts for binary outcomes are routine in sports, online experimentation, medicine, and finance. Retrospective narratives, however, often hinge on pathwise extremes: for example, a forecast that becomes "90% certain" for an event that ultimately does not occur. Standard pointwise calibration tools do not quantify how frequently such extremes should arise under correct sequential calibration, where the ideal forecast sequence is a bounded martingale that ends at the realized outcome. We derive benchmark distributions for extreme-path functionals conditional on the terminal outcome, emphasizing the peak-on-loss: the largest forecast value attained along realizations that end in failure. In continuous time with continuous paths we obtain an exact closed-form benchmark; in discrete time we prove sharp finite-sample bounds together with an explicit correction decomposition that isolates terminal-step crossings and overshoots. These results yield model-agnostic null targets and one-sided tail probabilities for diagnosing sequential miscalibration from extreme-path behavior. We also develop competitive extensions tailored to win-probability feeds and illustrate the approach using ESPN win-probability series for NFL and NBA regular-season games (2018-2024), finding broad agreement with the benchmark in the NFL and systematic departures in the NBA.
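As an illustration of the kind of closed-form benchmark described above, the peak-on-loss tail can be sketched in continuous time by an optional-stopping argument. The derivation below is worked out here from first principles for a continuous martingale on [0,1]; whether it coincides term-for-term with the paper's exact benchmark is an assumption.

```latex
% Sketch: peak-on-loss tail for a continuous-path martingale forecast.
% Let (p_t) be a continuous martingale in [0,1] with p_0 = p and terminal
% value Y \in \{0,1\}. Fix m \in [p, 1). By continuity, any path ending at
% Y = 1 must first cross m, so stopping at the first hitting time of m gives
% p = m \cdot \Pr(\max_t p_t \ge m), and after hitting m the path is absorbed
% at 0 with probability 1 - m. Hence, conditional on a loss,
\[
  \Pr\!\left(\max_{t} p_t \ge m \,\middle|\, Y = 0\right)
  = \frac{\Pr\left(\max_{t} p_t \ge m,\; Y = 0\right)}{\Pr(Y = 0)}
  = \frac{(p/m)\,(1-m)}{1-p},
  \qquad m \in [p, 1).
\]
```

For example, with a prior forecast of p = 0.5 and threshold m = 0.9, the formula gives 1/9: under correct sequential calibration, roughly one in nine eventual losers should at some point be forecast as a 90% favorite, so such paths are unremarkable rather than evidence of miscalibration.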