Von Neumann entropy (VNE) is a fundamental quantity in quantum information theory and has recently been adopted in machine learning as a spectral measure of diversity for kernel matrices and kernel covariance operators. While maximizing VNE under constraints is well understood in quantum settings, a principled analogue of the classical maximum entropy framework, particularly its decision-theoretic and game-theoretic interpretation, has not been explicitly developed for VNE in data-driven contexts. In this paper, we extend the minimax formulation of the maximum entropy principle due to Grünwald and Dawid to the setting of von Neumann entropy, providing a game-theoretic justification for VNE maximization over density matrices and trace-normalized positive semidefinite operators. This perspective yields a robust interpretation of maximum-VNE solutions under partial information and clarifies their role as least-committed inferences in spectral domains. We then illustrate how the resulting Maximum VNE principle applies to modern machine learning problems through two representative applications: selecting a kernel representation from multiple normalized embeddings via kernel-based VNE maximization, and completing kernel matrices from partially observed entries. These examples demonstrate how the proposed framework offers a unifying information-theoretic foundation for VNE-based methods in kernel learning.
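To make the central quantity concrete, the following is a minimal sketch of computing the VNE of a trace-normalized kernel matrix, as the abstract describes (treating K / tr(K) as a density-matrix analogue). The RBF kernel, bandwidth, and sample data below are illustrative assumptions, not specifics from the paper.

```python
import numpy as np

def von_neumann_entropy(K: np.ndarray, eps: float = 1e-12) -> float:
    """VNE of a PSD kernel matrix K: normalize to unit trace, then
    take the Shannon entropy of the resulting eigenvalue spectrum."""
    rho = K / np.trace(K)                  # trace-normalized: tr(rho) = 1
    eigvals = np.linalg.eigvalsh(rho)      # real spectrum of symmetric rho
    eigvals = np.clip(eigvals, eps, None)  # guard against log(0) / tiny negatives
    return float(-np.sum(eigvals * np.log(eigvals)))

# Illustrative example: an RBF kernel on random points (assumed setup)
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))
sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
K = np.exp(-0.5 * sq_dists)
H = von_neumann_entropy(K)
# VNE is bounded by log(n); the maximum is attained at the flat spectrum
assert 0.0 <= H <= np.log(50)
```

A flat spectrum (e.g. K proportional to the identity) attains the maximum log(n), while a rank-one K has entropy near zero, which is why VNE serves as the spectral diversity measure the abstract refers to.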