Generative diffusion models, famous for their performance in image generation, are popular in various cross-domain applications. However, their use in the communication community has been mostly limited to auxiliary tasks like data modeling and feature extraction. These models hold greater promise for fundamental problems in network optimization than traditional machine learning methods do. Discriminative deep learning often falls short due to its single-step input-output mapping and lack of global awareness of the solution space, especially given the complexity of network optimization's objective functions. In contrast, generative diffusion models can consider a broader range of solutions and exhibit stronger generalization by learning parameters that describe the distribution of the underlying solution space, with higher probabilities assigned to better solutions. We propose a new framework, Diffusion Model-based Solution Generation (DiffSG), which leverages the intrinsic distribution learning capabilities of generative diffusion models to learn high-quality solution distributions conditioned on given inputs. Because the optimal solution has high probability under this distribution, it can be reached effectively through repeated sampling. We validate the performance of DiffSG on several typical network optimization problems, including mixed-integer non-linear programming, convex optimization, and hierarchical non-convex optimization. Our results demonstrate that DiffSG outperforms existing baseline methods not only on in-domain inputs but also on out-of-domain inputs. In summary, we demonstrate the potential of generative diffusion models in tackling complex network optimization problems and outline a promising path for their broader application in the communication community. Our code is available at https://github.com/qiyu3816/DiffSG.
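The sample-and-select idea at the core of the abstract can be sketched in a few lines. The code below is a toy illustration under stated assumptions, not the authors' implementation: `toy_denoise` stands in for a trained noise-prediction network, the reverse-diffusion update is a simplified deterministic rule, and `diffsg_best_of_k` shows only the "draw k candidates from the learned solution distribution, keep the best under the objective" mechanic.

```python
import numpy as np

def sample_solution(denoise, x_cond, dim, steps=50, rng=None):
    """One simplified reverse-diffusion pass: start from Gaussian noise
    and iteratively denoise toward a candidate solution, conditioned on
    the problem input x_cond. `denoise` is a stand-in for a trained
    noise-prediction network; the update rule is deliberately minimal."""
    rng = rng or np.random.default_rng()
    s = rng.standard_normal(dim)               # pure-noise starting point
    for t in range(steps, 0, -1):
        alpha = 0.5 * t / steps                # shrinking step size
        s = s - alpha * denoise(s, x_cond, t)  # move toward the data mode
    return s

def diffsg_best_of_k(denoise, objective, x_cond, dim, k=16, seed=0):
    """Repeated sampling: draw k candidates from the (learned) solution
    distribution and keep the one with the best objective value."""
    rng = np.random.default_rng(seed)
    candidates = [sample_solution(denoise, x_cond, dim, rng=rng)
                  for _ in range(k)]
    return min(candidates, key=lambda s: objective(s, x_cond))

# Hypothetical toy problem: minimize ||s - x_cond||^2. A "perfectly
# trained" denoiser for this target simply points from the current
# state toward x_cond.
toy_denoise = lambda s, x, t: s - x
toy_objective = lambda s, x: float(np.sum((s - x) ** 2))

x = np.array([1.0, -2.0, 0.5])
best = diffsg_best_of_k(toy_denoise, toy_objective, x, dim=3, k=32)
print(toy_objective(best, x))  # near-zero residual
```

In the paper's setting, the denoiser would be a network trained on (input, solution) pairs so that better solutions receive higher probability mass; the best-of-k selection then needs only a handful of samples to land near the optimum.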