In many systems, the true data-generating process is unknown, and forecasters must rely on observed time series. This study proposes a pre-modeling diagnostic framework that assesses horizon-specific forecastability before model selection begins. Forecastability is operationalized as auto-mutual information at lag h, which quantifies how much past observations reduce uncertainty about future values; it is estimated with a k-nearest-neighbor estimator computed strictly on training data to preserve out-of-sample validity. The diagnostic signal is validated against realized out-of-sample symmetric mean absolute percentage error (sMAPE) across 42,355 time series spanning six temporal frequencies, using benchmark and higher-capacity probe models under a rolling-origin protocol. The results reveal a strong frequency-dependent relationship between measurable dependence and realized forecast error: for five of six frequencies, auto-mutual information exhibits a consistent negative rank association with realized error, supporting its use as a forecast triage signal for modeling investment decisions, whereas daily series show weaker discrimination despite measurable dependence. Across all frequencies, median forecast error declines monotonically from the low- to the high-forecastability tercile, demonstrating decision-relevant separation. Overall, the findings establish measurable past-future dependence as a practical screening tool for analytics-driven forecasting strategy: it identifies when advanced models are likely to add value, when simple baselines suffice, and when attention should shift from accuracy improvement to robust decision design, thereby supporting a diagnostic-first approach to modeling effort and resource allocation in organizational forecasting contexts.
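The two quantities at the core of the study can be sketched concretely. The snippet below is an illustrative sketch, not the paper's implementation: it estimates auto-mutual information at lag h with scikit-learn's kNN-based `mutual_info_regression` (a Kraskov-style estimator) and computes sMAPE on the 0-200% scale. The choice of estimator, k=3, and the toy AR(1) series are assumptions made here for illustration.

```python
# Hedged sketch: kNN auto-mutual information at lag h, plus sMAPE.
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def auto_mutual_information(series, h, k=3, seed=0):
    """kNN estimate of I(x_t ; x_{t+h}) for a 1-D series, in nats."""
    x = np.asarray(series, dtype=float)
    past, future = x[:-h].reshape(-1, 1), x[h:]
    return mutual_info_regression(past, future,
                                  n_neighbors=k, random_state=seed)[0]

def smape(actual, forecast):
    """Symmetric mean absolute percentage error (0-200% scale)."""
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(2.0 * np.abs(f - a) / (np.abs(a) + np.abs(f)))

rng = np.random.default_rng(0)
# Toy contrast: an AR(1) series has strong lag-1 dependence, so its
# AMI at h=1 should clearly exceed that of an i.i.d. noise series.
eps = rng.normal(size=500)
ar1 = np.zeros(500)
for t in range(1, 500):
    ar1[t] = 0.9 * ar1[t - 1] + eps[t]
noise = rng.normal(size=500)

print(auto_mutual_information(ar1, h=1))    # substantially above zero
print(auto_mutual_information(noise, h=1))  # near zero
print(smape([100, 200], [110, 190]))        # ≈ 7.33
```

In the paper's triage framing, a series like `ar1` would land in a high-forecastability tercile (model investment likely pays off), while `noise` would land in the low tercile, where effort is better spent on robust decision design than on accuracy improvement.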