Mobile edge computing (MEC) based wireless metaverse services offer an untethered, immersive experience to users, where superior quality of experience (QoE) must be achieved under stringent latency constraints and visual quality demands. To this end, MEC-based intelligent resource allocation for virtual reality users needs coordination across MEC servers to harness distributed data. Federated learning (FL) is a promising solution and can be combined with reinforcement learning (RL) to develop generalized policies across MEC servers. However, conventional FL requires transmitting the full model parameters between MEC servers and the cloud, and suffers performance degradation due to naive global aggregation, especially in heterogeneous multi-radio access technology environments. To address these challenges, this paper proposes the Federated Split Decision Transformer (FSDT), an offline RL framework in which the transformer model is partitioned between MEC servers and the cloud. Agent-specific components (e.g., MEC-based embedding and prediction layers) enable local adaptability, while shared global layers in the cloud facilitate cooperative training across MEC servers. Experimental results demonstrate that FSDT improves QoE by up to 10% in heterogeneous environments compared to baselines, while offloading nearly 98% of the transformer model parameters to the cloud, thereby reducing the computational burden on MEC servers.
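The split described above keeps small agent-specific layers (modality embeddings and the prediction head) on each MEC server while the parameter-heavy transformer blocks live in the cloud. A minimal sketch of that parameter accounting is below; the configuration (model width, layer count, state/action dimensions) is illustrative and not taken from the paper, and only weight matrices are counted (biases and LayerNorm parameters are omitted):

```python
def fsdt_param_split(d_model=256, n_layers=8, state_dim=17, act_dim=6, max_timestep=100):
    """Rough parameter accounting for a split Decision Transformer.

    Local (MEC-side): modality embeddings and the action-prediction head.
    Global (cloud-side): the stack of shared transformer blocks.
    Configuration values are illustrative assumptions, not from the paper.
    """
    # Local, agent-specific layers kept on each MEC server:
    # one embedding matrix each for states, actions, and returns-to-go.
    embed = (state_dim + act_dim + 1) * d_model
    time_embed = max_timestep * d_model   # learned timestep embedding table
    head = d_model * act_dim              # action-prediction head
    local = embed + time_embed + head

    # Shared global layers offloaded to the cloud: each block has roughly
    # 4*d^2 attention weights (Q, K, V, output) + 8*d^2 feed-forward weights.
    global_ = n_layers * 12 * d_model * d_model
    return local, global_, global_ / (local + global_)

local, global_, frac = fsdt_param_split()
print(f"local={local}, global={global_}, shared fraction={frac:.3f}")
```

Even at this toy scale, the shared transformer body dominates the parameter count, which is why offloading it to the cloud removes most of the storage and compute burden from the MEC servers while each server retains only its lightweight personalized layers.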