Motivated by the rapidly growing mathematical literature on operator approximation with neural networks, we present a novel universal operator approximation theorem for a broad class of encoder-decoder architectures. In this study, we focus on approximating continuous operators in $\mathcal{C}(\mathcal{X}, \mathcal{Y})$, where $\mathcal{X}$ and $\mathcal{Y}$ are infinite-dimensional normed or metric spaces, and we consider uniform convergence on compact subsets of $\mathcal{X}$. Unlike standard results in the operator learning literature, we investigate the case where the approximating operator sequence can be chosen independently of the compact sets. Taking a topological perspective, we analyze different types of operator approximation and show that compact-set-independent approximation is a strictly stronger property in most relevant operator learning frameworks. To establish our results, we introduce a new approximation property tailored to encoder-decoder architectures, which enables us to prove a universal operator approximation theorem ensuring uniform convergence on every compact subset of $\mathcal{X}$. This result unifies and extends existing universal operator approximation theorems for various encoder-decoder architectures, including classical DeepONets, BasisONets, special cases of MIONets, architectures based on frames, and other related approaches.
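For concreteness, one way to formalize the distinction drawn above is the following sketch, in illustrative notation not taken from the paper itself (the symbols $G$, $G_n$, $K$, and the metric $d_{\mathcal{Y}}$ are assumptions). The standard notion asks that for every compact $K \subset \mathcal{X}$ and every $\varepsilon > 0$ there exist an operator $G_{K,\varepsilon}$ in the architecture class with
$$\sup_{x \in K} d_{\mathcal{Y}}\bigl(G(x), G_{K,\varepsilon}(x)\bigr) < \varepsilon,$$
whereas the compact-set-independent notion asks for a single sequence $(G_n)_{n \in \mathbb{N}}$ in the architecture class satisfying
$$\lim_{n \to \infty} \sup_{x \in K} d_{\mathcal{Y}}\bigl(G(x), G_n(x)\bigr) = 0 \quad \text{for every compact } K \subset \mathcal{X}.$$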