We propose the first algorithms with non-asymptotic convergence guarantees for computing the Petz-Augustin capacity, which generalizes the channel capacity and characterizes the optimal error exponent in classical-quantum channel coding. This capacity can be equivalently expressed as the maximization of two generalizations of mutual information: the Petz-Rényi information and the Petz-Augustin information. To maximize the Petz-Rényi information, we show that it corresponds to a convex Hölder-smooth optimization problem, and hence the universal fast gradient method of Nesterov (2015), along with its convergence guarantees, readily applies. Regarding the maximization of the Petz-Augustin information, we adopt a two-layered approach: we show that the objective function is smooth relative to the negative Shannon entropy and can be efficiently optimized by entropic mirror descent; each iteration of entropic mirror descent requires computing the Petz-Augustin information, for which we propose a novel fixed-point algorithm and establish its contractivity with respect to the Thompson metric. Notably, this two-layered approach can be viewed as a generalization of the mirror-descent interpretation of the Blahut-Arimoto algorithm due to He et al. (2024).
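To make the outer layer concrete, here is a minimal classical sketch of entropic mirror descent, i.e., exponentiated-gradient updates on the probability simplex. The objective (Shannon entropy, maximized at the uniform distribution), step size, and iteration count are illustrative choices for this toy, not the paper's actual Petz-Augustin objective.

```python
import numpy as np

def entropic_mirror_descent(grad, p0, eta=0.5, iters=100):
    """Maximize a concave objective over the simplex via multiplicative
    (exponentiated-gradient) updates: p_{k+1} ∝ p_k * exp(eta * grad f(p_k))."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        w = p * np.exp(eta * grad(p))
        p = w / w.sum()  # normalization = Bregman projection onto the simplex
    return p

# Toy objective: Shannon entropy H(p) = -sum_i p_i log p_i, maximized at uniform.
grad_entropy = lambda p: -(np.log(p) + 1.0)

p_star = entropic_mirror_descent(grad_entropy, [0.7, 0.2, 0.1])
```

In log-coordinates the update for this toy objective contracts the deviation from uniform by a factor (1 - eta) per step, so the iterates converge geometrically to the uniform distribution.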
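The inner-layer idea, contractivity of a fixed-point map with respect to the Thompson metric, can be illustrated with a simple classical toy: for an entrywise-positive matrix A and alpha in (0, 1), the map T(x) = (Ax)^(1-alpha) contracts the Thompson metric d(x, y) = ||log x - log y||_inf by a factor of (1 - alpha), since positive linear maps are non-expansive in this metric and the entrywise power scales it by exactly 1 - alpha. The specific map and matrix below are illustrative assumptions, not the paper's Petz-Augustin operator.

```python
import numpy as np

def thompson(x, y):
    """Thompson metric on the positive orthant: ||log x - log y||_inf."""
    return np.max(np.abs(np.log(x) - np.log(y)))

rng = np.random.default_rng(0)
A = rng.uniform(0.5, 2.0, size=(4, 4))   # entrywise-positive matrix
alpha = 0.5
T = lambda x: (A @ x) ** (1.0 - alpha)   # a (1 - alpha)-contraction in d_T

# Empirical contraction ratio on a random pair of positive vectors.
x = rng.uniform(0.1, 5.0, size=4)
y = rng.uniform(0.1, 5.0, size=4)
ratio = thompson(T(x), T(y)) / thompson(x, y)   # bounded by 1 - alpha

# Banach fixed-point iteration: the residual shrinks geometrically.
z = np.ones(4)
for _ in range(60):
    z = T(z)
residual = thompson(T(z), z)
```

By the Banach fixed-point theorem, the iteration converges geometrically to the unique positive fixed point, mirroring the convergence guarantee established for the proposed fixed-point algorithm.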