Despite advances in high-performance computing and modern numerical algorithms, computational cost remains prohibitive for multi-query kinetic plasma simulations. In this work, we develop data-driven reduced-order models (ROMs) for collisionless electrostatic plasma dynamics governed by the kinetic Vlasov-Poisson equation. Our ROM approach projects the equation onto a linear subspace spanned by proper orthogonal decomposition (POD) modes. We introduce an efficient tensorial method that updates the nonlinear term using a precomputed third-order tensor. To capture multiscale behavior with a minimal number of POD modes, we decompose the solution manifold into multiple time windows and build temporally local ROMs. We consider two windowing strategies: one based on physical time and the other on the electric field energy. Applied to 1D1V Vlasov-Poisson simulations, namely a prescribed E-field, Landau damping, and the two-stream instability, our ROMs accurately capture the total energy of the system in both parametric and time-extrapolation cases. The temporally local ROMs are more efficient and accurate than a single global ROM. Moreover, in the two-stream instability case, the energy-windowing reduced-order model (EW-ROM) outperforms the time-windowing reduced-order model (TW-ROM) in both efficiency and accuracy. With the tensorial approach, EW-ROM solves the equation approximately 90 times faster than the Eulerian simulation while maintaining a maximum relative error of 7.5% on the training data and 11% on the testing data.
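The POD projection and precomputed-tensor idea described above can be sketched as follows. This is a minimal illustration with synthetic data, not the paper's actual operators: `B` is a random stand-in for a generic quadratic (bilinear) nonlinearity such as the E-field/velocity-gradient coupling, and all dimensions are chosen small for demonstration. The key point is that the reduced nonlinear term costs O(r^3) per evaluation online, independent of the full dimension n.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, r = 60, 30, 4           # full state dim, number of snapshots, ROM dim

# Synthetic snapshot matrix standing in for full-order simulation data
X = rng.standard_normal((n, m))

# POD basis: leading left singular vectors of the snapshot matrix
U, _, _ = np.linalg.svd(X, full_matrices=False)
Phi = U[:, :r]                # n x r orthonormal POD basis

# Hypothetical quadratic full-order nonlinearity N(f) = B(f, f),
# with B bilinear (a random stand-in, not the Vlasov operator itself)
B = rng.standard_normal((n, n, n)) / n

# Offline stage: precompute the third-order ROM tensor
# T[k, i, j] = Phi[:, k]^T  B(Phi[:, i], Phi[:, j])
T = np.einsum('pk,pqs,qi,sj->kij', Phi, B, Phi, Phi, optimize=True)

# Online stage: reduced nonlinear term from reduced coordinates a,
# at O(r^3) cost with no dependence on n
a = rng.standard_normal(r)
N_rom = np.einsum('kij,i,j->k', T, a, a)

# Consistency check: by bilinearity of B, this matches the projected
# full-order evaluation Phi^T B(Phi a, Phi a)
f = Phi @ a
N_full = Phi.T @ np.einsum('pqs,q,s->p', B, f, f)
assert np.allclose(N_rom, N_full)
```

Because `B` is bilinear, the reduced evaluation is exact (no additional approximation beyond the POD truncation); the savings come from never touching an n-dimensional vector during time stepping.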