Time series forecasting can be viewed as a generative problem that requires both semantic understanding of contextual conditions and stochastic modeling of continuous temporal dynamics. Existing approaches typically rely either on autoregressive large language models (LLMs) for semantic context modeling or on diffusion-style models for continuous probabilistic generation; neither alone adequately models both aspects. In this work, we propose CoGenCast, a hybrid generative framework that couples pre-trained LLMs with a flow-matching mechanism for effective time series forecasting. Specifically, we reconfigure a pre-trained decoder-only LLM into a native encoder-decoder forecasting backbone by modifying only its attention topology, enabling bidirectional context encoding and causal representation generation. On top of this backbone, a flow-matching mechanism models temporal evolution, capturing continuous stochastic dynamics conditioned on the autoregressively generated representations. Notably, CoGenCast naturally supports multimodal forecasting and cross-domain unified training. Extensive experiments on multiple benchmarks show that CoGenCast consistently outperforms competitive baselines. Code is available at https://github.com/liuyaguo/_CoGenCast.
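The flow-matching training objective referenced above can be illustrated with a minimal sketch. This is not the paper's implementation; it only shows the standard linear-interpolation (rectified-flow) formulation, where a model would be trained to predict the velocity `x1 - x0` at an interpolated point `x_t`, optionally conditioned on context representations. All names here are illustrative.

```python
import numpy as np

def flow_matching_pair(x0, x1, t):
    """Linear flow-matching interpolation: x_t = (1 - t) * x0 + t * x1.
    The regression target for the velocity network is v = x1 - x0."""
    x_t = (1.0 - t) * x0 + t * x1
    v_target = x1 - x0
    return x_t, v_target

# Toy example: x0 is a noise sample, x1 a ground-truth future window.
rng = np.random.default_rng(0)
x0 = rng.standard_normal(4)            # noise sample
x1 = np.array([1.0, 2.0, 3.0, 4.0])    # target future values
x_t, v = flow_matching_pair(x0, x1, t=0.5)
```

In a full model, a neural network `v_theta(x_t, t, c)` would regress `v` with a mean-squared-error loss, where `c` stands in for the conditioning representations produced by the LLM backbone; sampling then integrates the learned velocity field from noise to data.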