AI-enabled Radio Access Networks (AI-RANs) are expected to serve heterogeneous users with time-varying learning tasks over shared edge resources. Ensuring equitable inference performance across these users requires adaptive and fair learning mechanisms. This paper introduces an online-within-online fair multi-task learning (OWO-FMTL) framework that ensures long-term equity across users. The method combines two learning loops: an outer loop that updates the shared model across rounds, and an inner loop that rebalances user priorities within each round via a lightweight primal-dual update. Equity is quantified via generalized alpha-fairness, allowing a tunable trade-off between efficiency and fairness. The framework guarantees diminishing performance disparity over time and operates with computational overhead low enough for edge deployment. Experiments on convex and deep learning tasks confirm that OWO-FMTL outperforms existing multi-task learning baselines under dynamic scenarios.
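To make the inner loop concrete, the sketch below illustrates how a generalized alpha-fairness criterion can drive a per-round reweighting of user priorities. The function names and the specific loss-proportional update are illustrative assumptions, not the paper's exact primal-dual rule; they only show how alpha interpolates between efficiency (uniform weights at alpha = 0) and equity (emphasis on the worst-off user as alpha grows).

```python
import math

def alpha_utility(x: float, alpha: float) -> float:
    """Generalized alpha-fairness utility of a performance level x > 0:
    log(x) at alpha = 1, otherwise x**(1 - alpha) / (1 - alpha)."""
    if alpha == 1.0:
        return math.log(x)
    return x ** (1.0 - alpha) / (1.0 - alpha)

def rebalance(losses, alpha):
    """Illustrative inner-loop reweighting (not the paper's exact update):
    a user's priority grows with its current loss, and alpha steers the
    efficiency/fairness trade-off -- alpha = 0 gives uniform weights,
    while large alpha concentrates weight on the worst-off user."""
    raw = [loss ** alpha for loss in losses]
    total = sum(raw)
    return [r / total for r in raw]

# Example: with alpha = 2, the user with the larger loss gets more priority.
weights = rebalance([0.2, 0.8], alpha=2.0)
```

In this toy setting, `rebalance([0.2, 0.8], alpha=0.0)` returns uniform weights, while increasing `alpha` shifts priority toward the user with loss 0.8, mirroring the stated trade-off between efficiency and fairness.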