In this paper, we present a novel framework for enhancing the performance of Quanvolutional Neural Networks (QuNNs) by introducing trainable quanvolutional layers and addressing the critical challenges associated with them. Traditional quanvolutional layers, although beneficial for feature extraction, have largely been static, offering limited adaptability. Unlike state-of-the-art approaches, our research overcomes this limitation by enabling training within these layers, significantly increasing the flexibility and potential of QuNNs. However, introducing multiple trainable quanvolutional layers complicates gradient-based optimization, primarily because gradients are difficult to access across these layers. To resolve this, we propose a novel architecture, Residual Quanvolutional Neural Networks (ResQuNNs), which leverages the concept of residual learning: skip connections between layers facilitate the flow of gradients. By inserting residual blocks between quanvolutional layers, we ensure enhanced gradient access throughout the network, leading to improved training performance. Moreover, we provide empirical evidence on the strategic placement of these residual blocks within QuNNs. Through extensive experimentation, we identify an efficient configuration of residual blocks that enables gradient access across all layers of the network, ultimately resulting in efficient training. Our findings suggest that the precise location of residual blocks plays a crucial role in maximizing the performance gains of QuNNs. These results mark a substantial step forward in the evolution of quantum deep learning, offering new avenues for both theoretical development and practical quantum computing applications.
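The residual mechanism described above can be illustrated with a minimal sketch. Here the quanvolutional layer is replaced by a classical placeholder (a 1×1 convolution; in the actual architecture it would be a trainable quantum circuit applied patch-wise), since the point being demonstrated is only the skip connection y = x + Q(x) and the gradient path it preserves. All class and variable names are illustrative, not taken from the paper's implementation.

```python
# Minimal sketch of a residual block around a (placeholder) quanvolutional layer.
import torch
import torch.nn as nn


class QuanvLayer(nn.Module):
    """Classical stand-in for a trainable quanvolutional layer (assumption:
    the real layer is a parameterized quantum circuit applied to image patches)."""

    def __init__(self, channels: int):
        super().__init__()
        self.op = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.tanh(self.op(x))


class ResidualQuanvBlock(nn.Module):
    """Skip connection around the quanvolutional layer: y = x + Q(x)."""

    def __init__(self, channels: int):
        super().__init__()
        self.quanv = QuanvLayer(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The identity path guarantees a direct gradient route past Q,
        # which is the property ResQuNNs exploit for trainability.
        return x + self.quanv(x)


x = torch.randn(1, 4, 8, 8, requires_grad=True)
block = ResidualQuanvBlock(4)
loss = block(x).sum()
loss.backward()
```

After `backward()`, both the input and the layer's parameters receive gradients through the skip path, which is the behavior the architecture relies on when several such blocks are stacked.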