Generative AI (GenAI) has emerged as a transformative technology, enabling customized and personalized AI-generated content (AIGC) services. In this paper, we address the challenges of edge-enabled AIGC service provisioning, which remain underexplored in the literature. These services require executing GenAI models with billions of parameters, posing significant obstacles to the resource-limited wireless edge. We formulate a joint model caching and resource allocation problem for AIGC services that balances the trade-off between AIGC quality and latency metrics, and we obtain mathematical relationships between these metrics and the computational resources required by GenAI models through experimentation. We then decompose the formulation into a model caching subproblem on a long timescale and a resource allocation subproblem on a short timescale. Since the variables to be solved are discrete and continuous, respectively, we leverage a double deep Q-network (DDQN) algorithm to solve the former subproblem and propose a diffusion-based deep deterministic policy gradient (D3PG) algorithm to solve the latter. The proposed D3PG algorithm makes innovative use of diffusion models as the actor network to determine optimal resource allocation decisions. Finally, we integrate these two learning methods within an overarching two-timescale deep reinforcement learning (T2DRL) algorithm, whose performance is evaluated through comparative numerical simulations.
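To illustrate the discrete model-caching subproblem solver mentioned above, the following is a minimal sketch of the double deep Q-network (DDQN) target computation. This is an illustrative tabular stand-in, not the paper's actual networks; all names, shapes, and values are assumptions. The defining idea of DDQN is that the next action is *selected* with the online Q-function but *evaluated* with the target Q-function, which mitigates the overestimation bias of vanilla DQN:

```python
import numpy as np

def ddqn_target(reward, next_state, q_online, q_target, gamma=0.99):
    """Compute the DDQN bootstrap target for one transition.

    Hypothetical interface: q_online and q_target map a state to a
    vector of Q-values over the discrete caching actions.
    """
    a_star = int(np.argmax(q_online[next_state]))          # select action with the online net
    return reward + gamma * q_target[next_state][a_star]   # evaluate it with the target net

# Toy Q-tables over 2 states and 2 caching actions (assumed values).
q_online = {0: np.array([0.2, 0.8]), 1: np.array([0.5, 0.1])}
q_target = {0: np.array([0.3, 0.6]), 1: np.array([0.4, 0.2])}

y = ddqn_target(reward=1.0, next_state=0, q_online=q_online, q_target=q_target)
# Online net picks action 1 in state 0, but the target net's value for
# that action (0.6) is used: y = 1.0 + 0.99 * 0.6 = 1.594
```

In the paper's setting, the state would encode which GenAI models are cached at the edge and the actions would be cache update decisions; this sketch only shows the target rule itself.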
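The diffusion-as-actor idea in D3PG can be sketched as follows. This is a conceptual toy, not the paper's architecture: the "noise predictor" is a fixed linear map standing in for a trained network, and all dimensions, step counts, and the simplex projection are assumptions. The point is the shape of the computation: starting from Gaussian noise, the actor iteratively denoises toward a continuous resource-allocation vector conditioned on the observed state:

```python
import numpy as np

def denoise_step(x, state, t, W):
    """One reverse-diffusion step (hypothetical one-layer noise predictor)."""
    inp = np.concatenate([x, state, [t]])  # condition on state and timestep
    eps_hat = np.tanh(W @ inp)             # predicted noise component
    return x - 0.1 * eps_hat               # move against the predicted noise

def diffusion_actor(state, n_resources=4, n_steps=10, seed=0):
    """Map a state to a resource-allocation vector via iterative denoising."""
    rng = np.random.default_rng(seed)
    # Stand-in for learned weights of the noise-prediction network.
    W = rng.standard_normal((n_resources, n_resources + len(state) + 1))
    x = rng.standard_normal(n_resources)   # start from pure Gaussian noise
    for t in range(n_steps, 0, -1):
        x = denoise_step(x, state, t / n_steps, W)
    # Softmax projection so allocations are positive and sum to capacity 1.
    return np.exp(x) / np.exp(x).sum()

alloc = diffusion_actor(state=np.array([0.3, 0.7]))
```

In an actual D3PG training loop, the denoising network's weights would be updated with deterministic policy gradients from a critic, and the output would be scaled to the edge server's available compute and bandwidth rather than a unit simplex.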