We develop a minimax theory for operator learning, where the goal is to estimate an unknown operator between separable Hilbert spaces from finitely many noisy input-output samples. For uniformly bounded Lipschitz operators, we prove information-theoretic lower bounds together with matching or near-matching upper bounds, covering both fixed and random designs under Hilbert-space-valued Gaussian noise and Gaussian white noise errors. The rates are controlled by the spectrum of the covariance operator of the measure that defines the error metric. Our setting is general and, in particular, allows measures with unbounded support. A key implication is a curse of sample complexity: the minimax risk for generic Lipschitz operators cannot decay at any algebraic rate in the sample size. We obtain sharp characterizations when the covariance spectrum decays exponentially and give general upper and lower bounds in slower-decay regimes. Finally, we show that assuming higher regularity, i.e., Hölder smoothness, does not improve the minimax rates over the Lipschitz case beyond constant factors. Thus, learning operators of any finite regularity necessarily suffers from the curse of sample complexity.
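For concreteness, the following is a minimal formalization of the setting sketched above; the symbols $\mathcal{G}^\dagger$, $u_i$, $\mu$, $\mathrm{Lip}(L,B)$, and $\mathcal{R}_n$ are illustrative notation introduced here and need not match the paper's. The unknown operator $\mathcal{G}^\dagger \colon \mathcal{H} \to \mathcal{H}'$ between separable Hilbert spaces is observed through $n$ noisy samples
\[
  y_i = \mathcal{G}^\dagger(u_i) + \varepsilon_i, \qquad i = 1, \dots, n,
\]
with inputs $u_i$ either fixed or drawn at random (fixed vs. random design) and $\varepsilon_i$ Gaussian noise. Over the class of uniformly bounded Lipschitz operators
\[
  \mathrm{Lip}(L, B) = \Bigl\{ \mathcal{G} \colon \mathcal{H} \to \mathcal{H}' \;\Big|\; \sup_{u} \|\mathcal{G}(u)\|_{\mathcal{H}'} \le B, \ \|\mathcal{G}(u) - \mathcal{G}(v)\|_{\mathcal{H}'} \le L \|u - v\|_{\mathcal{H}} \Bigr\},
\]
the minimax risk in the $L^2_\mu$ error metric induced by a measure $\mu$ on inputs is
\[
  \mathcal{R}_n = \inf_{\widehat{\mathcal{G}}_n} \sup_{\mathcal{G}^\dagger \in \mathrm{Lip}(L,B)} \mathbb{E}\, \bigl\| \widehat{\mathcal{G}}_n - \mathcal{G}^\dagger \bigr\|_{L^2_\mu}^2,
\]
and the rates are controlled by the eigenvalue decay $\lambda_1 \ge \lambda_2 \ge \cdots$ of the covariance operator of $\mu$. In this notation, the curse of sample complexity reads
\[
  \liminf_{n \to \infty} n^{\alpha}\, \mathcal{R}_n = \infty \quad \text{for every } \alpha > 0,
\]
i.e., $\mathcal{R}_n$ decays more slowly than any algebraic rate $n^{-\alpha}$.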