Deep Operator Networks (DeepONets) are among the most prominent frameworks for operator learning, grounded in the universal approximation theorem for operators. However, training DeepONets typically requires significant computational resources. To address this limitation, we propose ELM-DeepONets, an Extreme Learning Machine (ELM) framework for DeepONets that leverages the backpropagation-free nature of ELM. By reformulating DeepONet training as a least-squares problem for newly introduced parameters, the ELM-DeepONet approach significantly reduces training complexity. Validation on benchmark problems, including nonlinear ODEs and PDEs, demonstrates that the proposed method not only achieves superior accuracy but also drastically reduces computational costs. This work offers a scalable and efficient alternative for operator learning in scientific computing.
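The core idea of the abstract, fixing randomly initialized hidden layers and solving only for a linear readout by least squares, can be illustrated with a minimal sketch. This is not the paper's implementation or its benchmarks; it is a toy ELM-style DeepONet for the antiderivative operator, with all names, network sizes, and the test problem chosen here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem (illustrative, not from the paper): learn the antiderivative
# operator G(u)(y) = \int_0^y u(x) dx for u(x) = a1*sin(pi x) + a2*sin(2 pi x).
m, p, q = 20, 40, 40           # sensor count, branch width, trunk width
xs = np.linspace(0.0, 1.0, m)  # sensor locations where u is sampled

def sample(n):
    """Draw n random input functions u, query points y, and exact outputs G(u)(y)."""
    a = rng.standard_normal((n, 2))
    U = a[:, :1] * np.sin(np.pi * xs) + a[:, 1:] * np.sin(2 * np.pi * xs)
    ys = rng.uniform(0.0, 1.0, (n, 1))
    G = (a[:, :1] * (1 - np.cos(np.pi * ys)) / np.pi
         + a[:, 1:] * (1 - np.cos(2 * np.pi * ys)) / (2 * np.pi))
    return U, ys, G.ravel()

# ELM ingredient: hidden-layer weights are random and stay fixed (no backprop).
Wb, cb = rng.standard_normal((m, p)) / np.sqrt(m), rng.uniform(-1, 1, p)
Wt, ct = rng.standard_normal((1, q)), rng.uniform(-1, 1, q)

def features(U, ys):
    """Bilinear branch-trunk features: rows are outer products b(u) x t(y)."""
    B = np.tanh(U @ Wb + cb)   # branch features of the sampled input function
    T = np.tanh(ys @ Wt + ct)  # trunk features of the query location
    return np.einsum('ip,iq->ipq', B, T).reshape(len(U), p * q)

# "Training" collapses to one linear least-squares solve for the readout A,
# since the prediction sum_{k,l} A_{kl} b_k(u) t_l(y) is linear in A.
U, ys, G = sample(2000)
A, *_ = np.linalg.lstsq(features(U, ys), G, rcond=None)

# Evaluate on held-out input functions and query points.
Ut, yt, Gt = sample(500)
pred = features(Ut, yt) @ A
rel_err = np.linalg.norm(pred - Gt) / np.linalg.norm(Gt)
print(f"relative test error: {rel_err:.3f}")
```

The design choice this sketch highlights is the one the abstract names: because only the newly introduced readout parameters are trained, the optimization is convex and solved in closed form, which is where the reported reduction in training cost comes from.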