With rapid advancements in large language models (LLMs), AI-generated content (AIGC) has emerged as a key driver of technological innovation and economic transformation. Personalizing AIGC services to meet individual user demands is essential but challenging for AIGC service providers (ASPs), owing to the subjective and complex requirements of mobile users (MUs) and the computational and communication resource constraints faced by ASPs. To tackle these challenges, we first develop a novel multi-dimensional quality-of-experience (QoE) metric that comprehensively evaluates AIGC services by integrating accuracy, token count, and timeliness. We focus on a mobile edge computing (MEC)-enabled AIGC network consisting of multiple ASPs deploying differentiated AIGC models on edge servers and multiple MUs with heterogeneous QoE requirements requesting AIGC services from ASPs. To incentivize ASPs to provide personalized AIGC services under MEC resource constraints, we propose a QoE-driven incentive mechanism. We formulate the problem as an equilibrium problem with equilibrium constraints (EPEC), in which MUs, as leaders, determine rewards, while ASPs, as followers, optimize resource allocation. To solve this problem, we develop a dual-perturbation reward optimization algorithm that reduces the implementation complexity of adaptive pricing. Experimental results demonstrate that, compared to state-of-the-art benchmarks, our proposed mechanism reduces average computational and communication overhead by approximately $64.9\%$, while the average service cost for MUs and the resource consumption of ASPs decrease by $66.5\%$ and $76.8\%$, respectively.
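To make the multi-dimensional QoE metric concrete, the sketch below illustrates one plausible way such a score could combine accuracy, token count, and timeliness. The weighted-sum form, the normalization choices, and all names (QoEWeights, qoe_score, max_tokens, deadline_s) are assumptions for illustration only; the paper's actual metric may be defined differently.

```python
from dataclasses import dataclass


@dataclass
class QoEWeights:
    """Hypothetical per-user preference weights over the three QoE dimensions."""
    accuracy: float = 0.5
    tokens: float = 0.3
    timeliness: float = 0.2


def qoe_score(accuracy: float, token_count: int, latency_s: float,
              max_tokens: int, deadline_s: float, w: QoEWeights) -> float:
    """Illustrative weighted-sum QoE: each dimension is normalized to [0, 1]
    before being combined. This is a sketch, not the paper's formulation."""
    token_util = min(token_count / max_tokens, 1.0)       # reward longer responses up to a budget
    timeliness = max(0.0, 1.0 - latency_s / deadline_s)   # decays as the response nears its deadline
    return (w.accuracy * accuracy
            + w.tokens * token_util
            + w.timeliness * timeliness)


# Example: an MU that values accuracy most, with a 512-token budget and a 10 s deadline.
print(qoe_score(accuracy=0.9, token_count=384, latency_s=4.0,
                max_tokens=512, deadline_s=10.0, w=QoEWeights()))
```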