Edge intelligence in space-air-ground integrated networks (SAGINs) can enable worldwide network coverage beyond geographical limitations, allowing users to access ubiquitous, low-latency intelligence services. Facing the global coverage and complex environments of SAGINs, edge intelligence can provision approximate large language model (LLM) agents for users via edge servers at ground base stations (BSs) or at cloud data centers relayed by satellites. As LLMs with billions of parameters are pre-trained on vast datasets, LLM agents have few-shot learning capabilities, e.g., chain-of-thought (CoT) prompting for complex tasks, which raises a new trade-off between resource consumption and performance in SAGINs. In this paper, we propose a joint caching and inference framework for edge intelligence to provision sustainable and ubiquitous LLM agents in SAGINs. We introduce "cached model-as-a-resource" for offering LLMs with limited context windows, and we propose a novel optimization framework, i.e., joint model caching and inference, that utilizes cached model resources alongside communication, computing, and storage resources to provision LLM agent services. We design an "age of thought" (AoT) metric that accounts for the CoT prompting of LLMs, and we propose a least-AoT cached model replacement algorithm to optimize the provisioning cost. Finally, we propose a deep Q-network-based modified second-bid (DQMSB) auction to incentivize network operators, which can improve allocation efficiency by 23% while guaranteeing strategy-proofness and freedom from adverse selection.
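To make the least-AoT cached model replacement policy concrete, the following is a minimal sketch, not the paper's exact formulation: it assumes each cached model's AoT grows linearly per time slot as its cached CoT context goes stale, resets to zero on inference, and the policy evicts the model with the largest AoT (keeping the least-AoT models cached). The class name, the aging rule, and the model names are illustrative assumptions.

```python
class LeastAoTCache:
    """Toy cache of LLM identifiers, each with an age-of-thought (AoT)
    counter. The linear aging rule and reset-on-access behavior are
    illustrative assumptions, not the paper's exact AoT definition."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.aot = {}  # model name -> current AoT

    def step(self):
        # Assumption: AoT of every cached model grows by 1 each time slot
        # as its cached chain-of-thought context becomes stale.
        for m in self.aot:
            self.aot[m] += 1

    def access(self, model):
        """Serve an inference request; return True on cache hit."""
        if model in self.aot:
            self.aot[model] = 0  # fresh thoughts: reset AoT on inference
            return True
        if len(self.aot) >= self.capacity:
            # Evict the model whose thoughts are stalest (largest AoT),
            # so the cache retains the least-AoT models.
            stalest = max(self.aot, key=self.aot.get)
            del self.aot[stalest]
        self.aot[model] = 0
        return False


cache = LeastAoTCache(capacity=2)
cache.access("llm-a"); cache.step()
cache.access("llm-b"); cache.step()
cache.access("llm-c")            # evicts llm-a (AoT 2 > llm-b's AoT 1)
print(sorted(cache.aot))         # ['llm-b', 'llm-c']
```

A real deployment would weigh AoT against model size and switching cost when choosing a victim; this sketch isolates only the aging and eviction logic.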