Quantum generative modeling is a very active area of research in the search for practical quantum advantage in data analysis. Quantum generative adversarial networks (QGANs) are leading candidates for quantum generative modeling and have been applied to diverse areas, from high-energy physics to image generation. The latent style-based QGAN, which relies on a classical variational autoencoder to encode the input data into a latent space and then uses a style-based quantum generator for data generation, has proven effective for image generation and drug design, suggesting that it needs far fewer trainable parameters than its classical counterpart to achieve comparable performance; however, this advantage has never been systematically studied. In this work we present the first comprehensive experimental analysis of this advantage for QGANs applied to SAT4 image generation, finding an exponential advantage in capacity scaling for the quantum generator in the hybrid latent style-based QGAN architecture. Careful tuning of the autoencoder is crucial to obtain stable, reliable results. Once this tuning is performed, and defining training as optimal when it is stable and the Fréchet Inception Distance (FID) score is both low and stable, the optimal capacity (i.e., number of trainable parameters) of the classical discriminator scales exponentially with the capacity of the quantum generator, and the same holds for the capacity of a classical generator. This points toward a form of quantum advantage for quantum generative modeling.
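To make the hybrid architecture discussed above concrete, the following is a minimal sketch of its three components: a classical VAE encoder mapping images to a latent space, a style-based quantum generator in which the latent/noise vector modulates the rotation angles of every circuit layer, and a small classical discriminator. All sizes, layer counts, and module names here are illustrative assumptions for exposition, not the exact model or hyperparameters used in the study; the final lines count trainable parameters, which is the notion of capacity compared in the scaling analysis.

```python
# Hedged sketch of a latent style-based QGAN (assumed sizes, not the paper's exact model).
import torch
import torch.nn as nn
import pennylane as qml

N_QUBITS, N_LAYERS, LATENT_DIM = 4, 3, 4          # illustrative sizes only

dev = qml.device("default.qubit", wires=N_QUBITS)

@qml.qnode(dev, interface="torch")
def quantum_generator(noise, weights):
    # Style-based re-uploading: the latent/noise vector modulates every layer's angles.
    for layer in range(N_LAYERS):
        for q in range(N_QUBITS):
            qml.RY(weights[layer, q, 0] * noise[q] + weights[layer, q, 1], wires=q)
        for q in range(N_QUBITS - 1):
            qml.CNOT(wires=[q, q + 1])
    return [qml.expval(qml.PauliZ(q)) for q in range(N_QUBITS)]

class Encoder(nn.Module):
    """Classical VAE encoder compressing a flattened image into a latent mean/log-variance."""
    def __init__(self, in_dim=28 * 28):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU())
        self.mu = nn.Linear(64, LATENT_DIM)
        self.logvar = nn.Linear(64, LATENT_DIM)
    def forward(self, x):
        h = self.net(x)
        return self.mu(h), self.logvar(h)

# Classical discriminator acting on generated latent samples.
discriminator = nn.Sequential(
    nn.Linear(N_QUBITS, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid()
)

# One generator forward pass and the capacity (trainable-parameter) counts.
gen_weights = torch.randn(N_LAYERS, N_QUBITS, 2, requires_grad=True)
fake_latent = quantum_generator(torch.rand(N_QUBITS), gen_weights)
print("quantum generator parameters:", gen_weights.numel())
print("classical discriminator parameters:",
      sum(p.numel() for p in discriminator.parameters()))
```

In such a setup, the capacity-scaling experiment amounts to sweeping the quantum generator's size (here N_LAYERS and N_QUBITS) and, for each setting, searching for the smallest classical discriminator capacity that yields stable training and a low, stable FID score.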