Generating quantum data by learning the underlying quantum distribution poses both theoretical and practical challenges, yet it is a critical task for understanding quantum systems. A fundamental question in quantum machine learning (QML) is the universality of approximation: whether a parameterized QML model can approximate any quantum distribution. We address this question by proving a universality theorem for the Many-body Projected Ensemble (MPE) framework, a method for quantum state design that uses a single many-body wave function to prepare random states. The theorem shows that MPE can approximate any distribution of pure states to within any target error in 1-Wasserstein distance. This provides a rigorous guarantee of universal expressivity, addressing a key theoretical gap in QML. On the practical side, we propose an Incremental MPE variant with layer-wise training to improve trainability. Numerical experiments on clustered quantum states and quantum chemistry datasets validate MPE's efficacy in learning complex quantum data distributions.
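As a concrete illustration of the projected-ensemble idea underlying MPE, the minimal NumPy sketch below prepares a random bipartite pure state, measures one subsystem in the computational basis, and collects the resulting ensemble of pure states on the other subsystem with their Born-rule probabilities. All variable names and subsystem sizes here are illustrative assumptions, not the paper's construction; the actual MPE framework trains a parameterized many-body wave function rather than sampling a Haar-random one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative subsystem sizes (assumed, not from the paper).
nA, nB = 2, 3
dA, dB = 2**nA, 2**nB

# Random global pure state |psi> on A (x) B, reshaped to a dA x dB matrix
# so that column b is the (unnormalized) state of A given outcome b on B.
psi = rng.normal(size=dA * dB) + 1j * rng.normal(size=dA * dB)
psi /= np.linalg.norm(psi)
M = psi.reshape(dA, dB)

# Measuring B in the computational basis: outcome b occurs with the
# Born probability ||M[:, b]||^2 and projects A onto M[:, b] / ||M[:, b]||.
probs = np.linalg.norm(M, axis=0) ** 2
states = [M[:, b] / np.linalg.norm(M[:, b]) for b in range(dB)]

# Sanity check: averaging the ensemble recovers the reduced density
# matrix of A, as the projected-ensemble construction requires.
rho_A = M @ M.conj().T
rho_avg = sum(p * np.outer(s, s.conj()) for p, s in zip(probs, states))
print(np.allclose(rho_A, rho_avg))  # the two density matrices agree
```

The pair `(probs, states)` is the projected ensemble on subsystem A; in the MPE setting, the global wave function would be parameterized and trained so that this ensemble approximates a target distribution of pure states.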