Federated learning (FL) enables collaborative learning across multiple clients. In most FL work, all clients train a single learning task. However, the recent proliferation of FL applications may increasingly require multiple FL tasks to be trained simultaneously, sharing clients' computing and communication resources, a setting we call Multiple-Model Federated Learning (MMFL). Current MMFL algorithms use naive average-based client-task allocation schemes that can lead to unfair performance when FL tasks have heterogeneous difficulty levels; e.g., tasks with larger models may need more rounds and more data to train. Just as naively allocating resources to generic computing jobs with heterogeneous resource needs can lead to unfair outcomes, naively allocating clients to FL tasks can lead to unfairness, with some tasks having excessively long training times or lower converged accuracies. Furthermore, since clients in the FL setting are typically not paid for their training effort, we face the further challenge that some clients may be unwilling to train some tasks, e.g., due to high computational costs, which may exacerbate unfairness in training outcomes across tasks. We address both challenges by first designing FedFairMMFL, a difficulty-aware algorithm that dynamically allocates clients to tasks in each training round. We provide guarantees on fairness and on FedFairMMFL's convergence rate. We then propose a novel auction design that incentivizes clients to train multiple tasks, so as to fairly distribute clients' training efforts across the tasks. We show how our fairness-based learning and incentive mechanisms affect training convergence, and finally evaluate our algorithm with multiple sets of learning tasks on real-world datasets.
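To make the difficulty-aware allocation idea concrete, the following is a minimal hypothetical sketch (not the paper's actual FedFairMMFL algorithm): in each round, every client is assigned to one task, sampled with probability proportional to that task's remaining accuracy gap, which serves here as a simple stand-in for task difficulty. The function name, the fixed accuracy target, and the gap-proportional sampling rule are all illustrative assumptions.

```python
import random

def allocate_clients(clients, task_accuracies, target=0.9, seed=0):
    """Hypothetical difficulty-aware allocation sketch.

    Each client is assigned to one task for this round, sampled with
    probability proportional to the task's remaining accuracy gap
    (target - current accuracy), so harder (less accurate) tasks
    receive more clients on average.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    # Remaining accuracy gap per task; floor at a tiny value so that
    # fully trained tasks keep a nonzero sampling probability.
    gaps = [max(target - acc, 1e-6) for acc in task_accuracies]
    total = sum(gaps)
    weights = [g / total for g in gaps]
    tasks = list(range(len(task_accuracies)))
    # Sample one task per client, weighted by difficulty.
    return {c: rng.choices(tasks, weights=weights, k=1)[0] for c in clients}
```

For example, with two tasks at accuracies 0.5 and 0.85 against a 0.9 target, the gaps are 0.4 and 0.05, so roughly eight out of nine clients would be routed to the harder first task in a given round.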