Quantum Extreme Learning Machines (QELMs) have emerged as a promising framework for quantum machine learning. Their appeal lies in the rich feature map induced by the dynamics of a quantum substrate (the quantum reservoir) and in the efficient post-measurement training via linear regression. Here we study the expressivity of QELMs by decomposing their predictions into a Fourier series. We show that the achievable Fourier frequencies are determined by the data-encoding scheme, while the Fourier coefficients depend on both the reservoir and the measurement. Notably, the expressivity of QELMs is fundamentally limited by the number of Fourier frequencies and the number of observables, while the complexity of the prediction hinges on the reservoir. As a cautionary note on scalability, we identify four sources that can lead to the exponential concentration of the observables as the system size grows (randomness, hardware noise, entanglement, and global measurements) and show how this concentration can turn QELMs into useless input-agnostic oracles. In particular, our result on reservoir-induced concentration strongly indicates that quantum reservoirs drawn from a highly random ensemble render QELM models unscalable. Our analysis elucidates both the potential and the fundamental limitations of QELMs, and lays the groundwork for systematically exploring quantum reservoir systems for other machine learning tasks.
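The claim that the encoding fixes the frequencies while the reservoir only shapes the coefficients can be checked numerically. The following is a minimal NumPy sketch, not taken from the paper: a single-qubit RX(x) encoding (generator spectrum {±1/2}, hence frequencies {0, ±1}), a randomly drawn two-qubit "reservoir" unitary, and Pauli-Z readout observables are all illustrative choices. The discrete Fourier transform of any measured feature shows no weight outside the encoding-determined frequencies, no matter which random reservoir is drawn.

```python
import numpy as np

rng = np.random.default_rng(0)

# Input encoding: single-qubit RX(x). The generator has spectrum {+1/2, -1/2},
# so every expectation value is a trigonometric polynomial with frequencies {0, +1, -1}.
def encode(x):
    return np.array([[np.cos(x / 2), -1j * np.sin(x / 2)],
                     [-1j * np.sin(x / 2), np.cos(x / 2)]])

# Fixed random two-qubit "reservoir" unitary (QR decomposition of a Ginibre matrix).
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
Q, R = np.linalg.qr(A)
U = Q * (np.diagonal(R) / np.abs(np.diagonal(R)))

# Readout: Pauli-Z observables on each qubit and their correlation.
Z, I2 = np.diag([1.0, -1.0]), np.eye(2)
observables = [np.kron(Z, I2), np.kron(I2, Z), np.kron(Z, Z)]

def features(x):
    """Expectation values of the observables after the reservoir acts on the encoded state."""
    q = encode(x) @ np.array([1.0, 0.0])          # encoded input qubit
    psi = U @ np.kron(q, np.array([1.0, 0.0]))    # reservoir dynamics
    return np.array([(psi.conj() @ O @ psi).real for O in observables])

# Fourier content of one feature on a uniform grid: only the DFT bins for
# frequencies 0 and +/-1 can be nonzero; all higher bins vanish. The random
# reservoir changes the coefficients, never the frequency support.
N = 8
xs = 2 * np.pi * np.arange(N) / N
spec = np.fft.fft([features(x)[0] for x in xs]) / N
leakage = np.max(np.abs(spec[2:N - 1]))  # amplitude at |frequency| >= 2
print(leakage)
```

Redrawing `U` with a different seed changes the nonzero coefficients but leaves `leakage` at numerical zero, which is the frequency/coefficient split stated in the abstract; training then amounts to linear regression on the `features(x)` vector.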