The domain-decomposition (DD) nonlinear-manifold reduced-order model (NM-ROM) is a computationally efficient way to incorporate underlying physics into a neural-network-based, data-driven approach. Compared to linear-subspace methods, NM-ROMs offer greater expressivity and better reconstruction, while DD enables cost-effective, parallel training of autoencoders by partitioning the domain into algebraic subdomains. In this work, we investigate the scalability of this approach with a "bottom-up" strategy: training NM-ROMs on smaller domains and then deploying them on larger, composable ones. Applied to the two-dimensional, time-dependent Burgers' equation, the method extrapolates stably and effectively from smaller to larger domains, achieving roughly 1% relative error and a speedup of nearly 700×.
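To make the two ingredients above concrete, the following minimal sketch partitions a snapshot matrix into algebraic subdomains and builds a per-subdomain reduced model. For simplicity it uses a linear POD/SVD projector as the reduction step; the NM-ROM described here would replace that projector with a trained autoencoder. The traveling-front snapshot data, the subdomain split, and the reduced dimension `r` are all illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

# Synthetic snapshot matrix: 100 spatial DOFs x 50 time steps of a
# traveling front (a stand-in for Burgers'-type solution data).
x = np.linspace(0.0, 1.0, 100)
t = np.linspace(0.0, 1.0, 50)
S = np.tanh(20.0 * (x[:, None] - 0.2 - 0.5 * t[None, :]))  # shape (100, 50)

# Algebraic domain decomposition: split the DOFs into two subdomains.
subdomains = [slice(0, 50), slice(50, 100)]

r = 5  # reduced dimension per subdomain (illustrative choice)
rel_errs = []
for sd in subdomains:
    S_sub = S[sd]
    # Linear-subspace baseline: POD basis from the SVD of the local
    # snapshots. An NM-ROM would instead train an autoencoder on S_sub
    # and use its decoder as a nonlinear reconstruction map.
    U, _, _ = np.linalg.svd(S_sub, full_matrices=False)
    U_r = U[:, :r]
    recon = U_r @ (U_r.T @ S_sub)  # encode to r coords, then decode
    rel_errs.append(np.linalg.norm(recon - S_sub) / np.linalg.norm(S_sub))

print(rel_errs)
```

Because each subdomain's reduction is computed independently from its own local snapshots, the per-subdomain models can be trained in parallel and, in the bottom-up strategy, reused when subdomains are composed into a larger domain.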