This study explores the combination of automated machine learning (AutoML) with model-based deep unfolding (DU) for optimizing wireless beamforming and waveforms. We convert the iterative proximal gradient descent (PGD) algorithm into a deep neural network in which the parameters of each layer are learned rather than predetermined. We further enhance the architecture with a hybrid layer that applies a learnable linear gradient transformation before the proximal projection. Using AutoGluon with a Tree-structured Parzen Estimator (TPE) for hyperparameter optimization (HPO) over an expanded search space, covering network depth, step-size initialization, optimizer, learning-rate scheduler, layer type, and post-gradient activation, the proposed auto-unrolled PGD (Auto-PGD) achieves 98.8% of the spectral efficiency of a conventional 200-iteration PGD solver with only five unrolled layers and 100 training samples. We also resolve a gradient-normalization issue to ensure consistent behavior between training and evaluation, and we demonstrate per-layer sum-rate logging as an interpretability tool. Together, these contributions substantially reduce training-data requirements and inference cost while retaining high interpretability relative to conventional black-box architectures.
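To make the unrolling concrete, the sketch below shows the forward pass such an Auto-PGD network computes for sum-rate beamforming: each "layer" performs one projected gradient-ascent step in which the step size mu_l and a linear gradient transform T_l are the per-layer quantities that would be learned (here they are fixed constants for illustration). The power-constraint projection, the sum-rate objective, and all variable names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def prox_power(W, p_max=1.0):
    # Proximal step: projection onto the transmit-power ball ||W||_F^2 <= p_max.
    norm = np.linalg.norm(W)
    return W if norm**2 <= p_max else W * np.sqrt(p_max) / norm

def sum_rate(H, W, sigma2=1.0):
    # Spectral efficiency: log2 det(I + H W W^H H^H / sigma2).
    HW = H @ W
    K = HW.shape[0]
    return float(np.real(np.log2(np.linalg.det(
        np.eye(K) + HW @ HW.conj().T / sigma2))))

def unrolled_pgd(H, W0, steps, transforms, sigma2=1.0):
    # One unrolled layer l computes: W <- prox(W + mu_l * (T_l @ grad)).
    # In Auto-PGD the step sizes mu_l and the linear gradient transforms T_l
    # are learnable per-layer parameters; here they are passed in as fixed
    # values so the forward pass can be run standalone.
    W = W0
    for mu, T in zip(steps, transforms):
        HW = H @ W
        K = HW.shape[0]
        A = np.linalg.inv(np.eye(K) + HW @ HW.conj().T / sigma2)
        grad = H.conj().T @ A @ HW / (sigma2 * np.log(2))  # ascent direction
        W = prox_power(W + mu * (T @ grad))
    return W
```

The network depth (number of unrolled layers), the initialization of the step sizes, and whether a layer uses the plain gradient or the transformed one are exactly the kinds of choices the TPE-based HPO searches over.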