Time-series forecasting in real-world applications such as finance and energy often faces challenges due to limited training data and complex, noisy temporal dynamics. Existing deep forecasting models typically supervise predictions using full-length temporal windows, which contain substantial high-frequency noise and obscure long-term trends. Moreover, auxiliary variables containing rich domain-specific information are often underutilized, especially in few-shot settings. To address these challenges, we propose LoFT-LLM, a frequency-aware forecasting pipeline that integrates low-frequency learning with semantic calibration via a large language model (LLM). First, a Patch Low-Frequency forecasting Module (PLFM) extracts stable low-frequency trends from localized spectral patches. Second, a residual learner models the remaining high-frequency variations. Finally, a fine-tuned LLM refines the predictions by incorporating auxiliary context and domain knowledge through structured natural language prompts. Extensive experiments on financial and energy datasets demonstrate that LoFT-LLM significantly outperforms strong baselines under both full-data and few-shot regimes, delivering superior accuracy, robustness, and interpretability.
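The core decomposition behind the first two stages can be illustrated with a minimal sketch. This is not the paper's PLFM implementation; it simply assumes the standard approach of splitting the input into fixed-length patches, applying an FFT per patch, retaining only the lowest-frequency bins as the trend, and leaving the remainder as the high-frequency residual. The function name `lowpass_trend` and the parameters `patch_len` and `keep_frac` are illustrative, not from the paper.

```python
import numpy as np

def lowpass_trend(series, patch_len=24, keep_frac=0.25):
    """Hypothetical patch-wise low-frequency extraction: split the
    series into patches, FFT each patch, zero out the high-frequency
    bins, and inverse-FFT back to a smooth per-patch trend."""
    n = len(series)
    trend = np.empty(n)
    for start in range(0, n, patch_len):
        patch = series[start:start + patch_len]
        spec = np.fft.rfft(patch)
        keep = max(1, int(len(spec) * keep_frac))  # lowest bins only
        spec[keep:] = 0.0
        trend[start:start + len(patch)] = np.fft.irfft(spec, n=len(patch))
    return trend

# Toy example: a slow sinusoid plus noise.
rng = np.random.default_rng(0)
t = np.arange(96)
series = np.sin(2 * np.pi * t / 48) + 0.3 * rng.standard_normal(96)

trend = lowpass_trend(series)
residual = series - trend  # high-frequency part for the residual learner
```

In this scheme the trend supervises the low-frequency forecaster, while `residual` becomes the target of the residual learner, so the noisy high-frequency content never contaminates the trend objective.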