Deep Operator Networks (DeepONets) provide a branch-trunk neural architecture for approximating nonlinear operators acting between function spaces. In the classical operator approximation framework, the input is a function $u\in C(K_1)$ defined on a compact set $K_1$ (typically a compact subset of a Banach space), and the operator maps $u$ to an output function $G(u)\in C(K_2)$ defined on a compact Euclidean domain $K_2\subset\mathbb{R}^d$. In this paper, we develop a topological extension in which the operator input lies in an arbitrary Hausdorff locally convex space $X$. We construct topological feedforward neural networks on $X$ using continuous linear functionals from the dual space $X^*$ and introduce topological DeepONets whose branch component acts on $X$ through such linear measurements, while the trunk component acts on the Euclidean output domain. Our main theorem shows that continuous operators $G:V\to C(K;\mathbb{R}^m)$, where $V\subset X$ and $K\subset\mathbb{R}^d$ are compact, can be uniformly approximated by such topological DeepONets. This extends the classical Chen-Chen operator approximation theorem from spaces of continuous functions to locally convex spaces and yields a branch-trunk approximation theorem beyond the Banach-space setting.
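Concretely, the branch-trunk representation in this topological setting can be sketched as follows (the notation is illustrative, not taken verbatim from the paper: $\ell_1,\dots,\ell_n\in X^*$ denote the continuous linear measurements, $b_k$ the branch network outputs, and $t_k$ the trunk network outputs):
$$
G(u)(y) \;\approx\; \sum_{k=1}^{p} b_k\bigl(\ell_1(u),\ldots,\ell_n(u)\bigr)\, t_k(y),
\qquad u \in V \subset X,\quad y \in K \subset \mathbb{R}^d.
$$
When $X = C(K_1)$ and the $\ell_j$ are point-evaluation functionals $u \mapsto u(x_j)$, this reduces to the classical DeepONet of the Chen-Chen framework; the topological extension replaces these point evaluations with arbitrary elements of the dual space $X^*$.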