Generative AI has received much attention in the image and language domains, with the transformer neural network continuing to dominate the state of the art. The application of these models to time series generation remains comparatively unexplored, yet it holds great utility for machine learning, privacy preservation, and explainability research. The present survey identifies this gap at the intersection of the transformer, generative AI, and time series data, and reviews works in this sparsely populated subdomain. The reviewed works vary widely in approach and have not yet converged on a conclusive answer to the problems the domain poses. GANs, diffusion models, state space models, and autoencoders were all encountered alongside, or surrounding, the transformers that originally motivated the survey. Although the domain remains too open to offer conclusive insights, the works surveyed are highly suggestive, and several recommendations for best practice, together with suggestions for valuable future work, are provided.