Tensor network machine learning models have shown remarkable versatility in tackling complex data-driven tasks, ranging from quantum many-body problems to classical pattern recognition. Despite their promising performance, a comprehensive understanding of the underlying assumptions and limitations of these models is still lacking. In this work, we focus on the rigorous formulation of their no-free-lunch theorem -- essential yet notoriously challenging to formalize for specific tensor network machine learning models. In particular, we rigorously analyze the generalization risks of learning target output functions from input data encoded in tensor network states. We first prove a no-free-lunch theorem for machine learning models based on matrix product states, i.e., one-dimensional tensor network states. Furthermore, we circumvent the challenging issue of calculating the partition function of the two-dimensional Ising model, and prove the no-free-lunch theorem for the case of two-dimensional projected entangled-pair states, by introducing a combinatorial method associated with the "puzzle of polyominoes". Our findings reveal the intrinsic limitations of tensor network-based learning models in a rigorous fashion, and open up an avenue for future analytical exploration of both the strengths and limitations of quantum-inspired machine learning frameworks.