Foundation models have demonstrated remarkable generalization, data efficiency, and robustness across various domains. In this paper, we explore the feasibility of foundation models for applications in the control domain. The success of these models is enabled by large-scale pretraining on Internet-scale datasets. Such datasets are available in fields like natural language processing and computer vision, but do not exist for dynamical systems. We address this challenge by pretraining a transformer-based foundation model exclusively on synthetic data and propose to sample dynamics functions from a reproducing kernel Hilbert space. Our pretrained model generalizes across prediction tasks for different dynamical systems, which we validate in simulation and hardware experiments, including cart-pole and Furuta pendulum setups. Additionally, the model can be fine-tuned effectively on new systems to increase performance even further. Our results demonstrate the feasibility of foundation models for dynamical systems that outperform specialist models in terms of generalization, data efficiency, and robustness.
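The idea of sampling dynamics functions from a reproducing kernel Hilbert space (RKHS) can be illustrated with a minimal sketch. Functions of the form f(x) = Σᵢ αᵢ k(x, cᵢ) lie in the RKHS of the kernel k, so drawing random centers cᵢ and weights αᵢ yields a random smooth dynamics function, which can then be integrated to produce synthetic pretraining trajectories. All names, kernel choices, and parameter values below (`sample_rkhs_dynamics`, the RBF kernel, `n_centers`, the Euler step size) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def sample_rkhs_dynamics(state_dim, n_centers=32, lengthscale=1.0,
                         weight_scale=0.5, rng=None):
    """Sample a random dynamics function f: R^d -> R^d from the RKHS
    of an RBF kernel (illustrative sketch, not the paper's code).

    f(x) = sum_i alpha_i * k(x, c_i), with random centers c_i and
    weights alpha_i, is a member of the RKHS induced by k.
    """
    rng = np.random.default_rng(rng)
    centers = rng.normal(size=(n_centers, state_dim))                  # c_i
    weights = weight_scale * rng.normal(size=(n_centers, state_dim))   # alpha_i

    def dynamics(x):
        # RBF kernel values k(x, c_i) = exp(-||x - c_i||^2 / (2 l^2))
        sq_dists = np.sum((x[None, :] - centers) ** 2, axis=1)
        k = np.exp(-sq_dists / (2 * lengthscale ** 2))
        return k @ weights  # f(x) in R^d

    return dynamics

# Generate one synthetic trajectory by Euler integration of a sampled system.
f = sample_rkhs_dynamics(state_dim=2, rng=0)
x = np.zeros(2)
traj = [x]
for _ in range(100):
    x = x + 0.05 * f(x)  # hypothetical step size of 0.05
    traj.append(x)
traj = np.array(traj)  # shape (101, 2)
```

Repeating this procedure with fresh random centers and weights produces a diverse family of dynamical systems, which is the kind of synthetic corpus a transformer could be pretrained on.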