We establish the theoretical framework for implementing the maximum entropy on the mean (MEM) method for linear inverse problems in the setting of approximate (data-driven) priors. We prove a.s. convergence for empirical means and further develop general estimates for the difference between the MEM solutions with different priors $\mu$ and $\nu$ based on the epigraphical distance between their respective log-moment generating functions. These estimates allow us to establish a rate of convergence in expectation for empirical means. We illustrate our results with denoising on the MNIST and Fashion-MNIST data sets.