In this paper, we present a novel framework for enhancing the performance of Quanvolutional Neural Networks (QuNNs) by introducing trainable quanvolutional layers and addressing the critical challenges associated with them. Traditional quanvolutional layers, although beneficial for feature extraction, have largely been static, offering limited adaptability. Unlike state-of-the-art approaches, our research overcomes this limitation by enabling training within these layers, significantly increasing the flexibility and potential of QuNNs. However, the introduction of multiple trainable quanvolutional layers complicates gradient-based optimization, primarily because gradients are difficult to access across these layers. To resolve this, we propose a novel architecture, Residual Quanvolutional Neural Networks (ResQuNNs), which leverages the concept of residual learning and facilitates gradient flow by adding skip connections between layers. By inserting residual blocks between quanvolutional layers, we ensure enhanced gradient access throughout the network, leading to improved training performance. Moreover, we provide empirical evidence on the strategic placement of these residual blocks within QuNNs. Through extensive experimentation, we identify an efficient configuration of residual blocks that enables gradient flow across all layers of the network and ultimately results in efficient training. Our findings suggest that the precise location of residual blocks plays a crucial role in maximizing the performance gains of QuNNs. Our results mark a substantial step forward in the evolution of quantum deep learning, offering new avenues for both theoretical development and practical quantum computing applications.
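The residual wiring described above can be sketched in a few lines. The following is a minimal, hypothetical illustration (not the paper's implementation): `quanv_layer` is a stand-in for a trainable quanvolutional layer, here replaced by a simple shape-preserving nonlinearity so the example runs without a quantum simulator. The point is the skip connection itself, which adds an identity path around the layer so gradients can bypass it during backpropagation.

```python
import numpy as np

def quanv_layer(x):
    # Placeholder for a trainable quanvolutional layer (hypothetical stand-in);
    # in the paper, this would encode input patches into qubits, apply a
    # parameterized quantum circuit, and measure expectation values.
    # Here we use a fixed shape-preserving nonlinearity purely for illustration.
    return np.tanh(x)

def residual_block(x, layer):
    # Skip connection: output = layer(x) + x. The identity term gives
    # gradient-based optimizers a direct path around the layer, which is
    # the mechanism ResQuNNs use to keep gradients accessible across
    # stacked quanvolutional layers.
    return layer(x) + x

x = np.ones((4, 4))          # toy input feature map
y = residual_block(x, quanv_layer)
```

The same wiring generalizes to multiple blocks by feeding each block's output into the next; the paper's contribution concerns where such blocks are placed among the quanvolutional layers.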