Operator learning based on neural operators has emerged as a promising paradigm for the data-driven approximation of operators mapping between infinite-dimensional Banach spaces. Despite significant empirical progress, our theoretical understanding of the efficiency of these approximations remains incomplete. This work addresses the parametric complexity of neural operator approximations for the general class of Lipschitz continuous operators. Motivated by recent findings on the limitations of specific architectures, termed the curse of parametric complexity, we here adopt an information-theoretic perspective. Our main contribution establishes lower bounds on the metric entropy of Lipschitz operators in two approximation settings: uniform approximation over a compact set of input functions, and approximation in expectation with input functions drawn from a probability measure. It is shown that these entropy bounds imply that, regardless of the activation function used, neural operator architectures attaining an approximation accuracy $\epsilon$ must have a size that is exponentially large in $\epsilon^{-1}$. Here, the size of an architecture is measured by the number of bits required to store the model in computational memory. The results of this work elucidate fundamental trade-offs and limitations in operator learning.
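In schematic form, the information-theoretic argument behind this size bound can be sketched as follows (the constant $c>0$ and rate exponent $\gamma>0$ below are placeholders, not the precise quantities established in this work). Let $N(\epsilon)$ denote the minimal number of balls of radius $\epsilon$ needed to cover the class of Lipschitz operators in the relevant metric, and let $H(\epsilon)=\log_2 N(\epsilon)$ be the corresponding metric entropy. If every operator in the class can be approximated to accuracy $\epsilon$ by some model encoded with $B$ bits, then the at most $2^{B}$ encodable models form an $\epsilon$-cover of the class, so that
\[
2^{B} \;\geq\; N(\epsilon), \qquad \text{i.e.} \qquad B \;\geq\; H(\epsilon).
\]
Consequently, an entropy lower bound of the form $H(\epsilon)\gtrsim \exp\!\bigl(c\,\epsilon^{-\gamma}\bigr)$ forces the bit count $B$, and hence the model size, to grow exponentially in a power of $\epsilon^{-1}$.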