Artificial intelligence-generated traffic is reshaping wireless networks. Specifically, because the amount of data generated to train machine learning models is massive, network resources must be carefully allocated to continue supporting standard applications. In this paper, we tackle the problem of allocating radio resources to two sets of concurrent devices communicating in uplink with a gateway over the same bandwidth. One set of devices performs federated learning (FL) and accesses the medium via FDMA, periodically uploading large models. The other set is throughput-oriented and accesses the medium via random access (RA), using either the ALOHA or slotted-ALOHA protocol. We derive close-to-optimal solutions to the non-convex problem of minimizing the system energy consumption subject to FL latency and RA throughput constraints. Our solutions show that ALOHA can sustain high FL efficiency, yielding up to 48% lower energy consumption when the system is dominated by FL traffic. Conversely, slotted-ALOHA becomes more efficient when RA traffic dominates, yielding 6% lower consumption.