This paper investigates compact large language model (LLM) deployment and world-model-assisted inference offloading in mobile edge computing (MEC) networks. We first propose an edge compact LLM deployment (ECLD) framework that jointly applies structured pruning, low-bit quantization, and knowledge distillation to construct edge-deployable LLM variants, and we evaluate the resulting models using four complementary metrics: accessibility, energy consumption, hallucination rate, and generalization accuracy. Building on these compact models, we formulate an MEC offloading optimization problem that minimizes the long-term average inference latency subject to per-device energy budgets and LLM-specific quality-of-service constraints on effective accuracy and hallucination. To solve this problem under unknown and time-varying network dynamics, we develop a world model-proximal policy optimization (PPO) algorithm, which augments on-policy PPO with a learned recurrent world model that provides improved value targets and short imagination rollouts. Extensive experiments on Llama-3.1-8B, Qwen3-8B, and Mistral-12B show that ECLD compresses the base models by about 70-80% in storage (e.g., from 15.3 GB to 3.3 GB for Llama-3.1-8B) and reduces per-query energy consumption by up to 50%, while largely preserving accuracy and often lowering hallucination compared with quantization-only or pruning-only baselines. The experiments further show that world model-PPO speeds up convergence by about 50%, improves the final reward by 15.8% over vanilla PPO, and reduces average inference latency by 12-30% across different user populations, while satisfying the accuracy and hallucination constraints and approaching the generation quality of always offloading with much of the efficiency of local execution.
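As a minimal sketch of the formulation summarized above (using hypothetical notation rather than the symbols defined later in the paper), let x_i(t) ∈ {0,1} denote whether device i offloads its query in slot t, let L_i(t) and E_i(t) be the resulting inference latency and energy draw, and let A_i(t) and H_i(t) be the effective accuracy and hallucination rate of the served response; the constrained offloading problem then takes roughly the form

\[
\begin{aligned}
\min_{\{x_i(t)\}} \quad & \lim_{T\to\infty} \frac{1}{T}\sum_{t=1}^{T}\frac{1}{N}\sum_{i=1}^{N} L_i(t) \\
\text{s.t.} \quad & \lim_{T\to\infty}\frac{1}{T}\sum_{t=1}^{T} E_i(t) \le E_i^{\max} \quad \forall i, \\
& A_i(t) \ge A^{\min}, \quad H_i(t) \le H^{\max} \quad \forall i, t, \\
& x_i(t) \in \{0,1\} \quad \forall i, t,
\end{aligned}
\]

where E_i^{max}, A^{min}, and H^{max} are the per-device energy budget and the accuracy and hallucination thresholds, respectively.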