Classical consensus-based strategies for federated and decentralized learning are statistically suboptimal in the presence of heterogeneous local data or task distributions. As a result, in recent years there has been growing interest in multitask or personalized strategies, which allow individual agents to benefit from one another in pursuing locally optimal models without enforcing consensus. Existing strategies either require precise prior knowledge of the underlying task relationships or are fully non-parametric, relying instead on meta-learning or proximal constructions. In this work, we introduce an algorithmic framework that strikes a balance between these extremes. By modeling task relationships through a Gaussian Markov Random Field with an unknown precision matrix, we develop a strategy that jointly learns both the task relationships and the local models, allowing agents to self-organize in a way consistent with their individual data distributions. Our theoretical analysis quantifies the quality of the learned relationships, and our numerical experiments demonstrate their practical effectiveness.
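The abstract names the modeling idea (a GMRF over tasks with an unknown precision matrix, learned jointly with the local models) but not the update rules. The following is a purely illustrative sketch, not the paper's algorithm: it alternates a gradient step on graph-regularized local least-squares losses with a closed-form precision update from the empirical task covariance. The step size `eta`, coupling weight `rho`, and ridge parameter `eps` are hypothetical choices for this toy example.

```python
import numpy as np

rng = np.random.default_rng(0)
K, d, n = 4, 5, 50  # tasks, model dimension, samples per task

# Synthetic related tasks: a shared base model plus per-task perturbations.
w_base = rng.normal(size=d)
tasks = []
for k in range(K):
    wk = w_base + 0.1 * rng.normal(size=d)
    Xk = rng.normal(size=(n, d))
    yk = Xk @ wk + 0.01 * rng.normal(size=n)
    tasks.append((Xk, yk))

W = np.zeros((K, d))   # row k holds task k's local model
Lam = np.eye(K)        # precision matrix over tasks (unknown a priori)
eta, rho, eps = 0.05, 0.1, 1e-2  # hypothetical hyperparameters

for it in range(200):
    # (1) Model step: gradient descent on the local losses plus the GMRF
    #     coupling rho * trace(W^T Lam W), which ties together tasks that
    #     the current precision matrix marks as related.
    grad = np.zeros_like(W)
    for k, (Xk, yk) in enumerate(tasks):
        grad[k] = Xk.T @ (Xk @ W[k] - yk) / n
    grad += rho * (Lam @ W)
    W -= eta * grad

    # (2) Relationship step: closed-form precision update from the
    #     ridge-regularized empirical task covariance S = W W^T / d,
    #     i.e. the maximizer of log det(Lam) - trace(S Lam).
    S = W @ W.T / d + eps * np.eye(K)
    Lam = np.linalg.inv(S)

print(np.round(Lam, 2))
```

In this sketch the negative off-diagonal entries of `Lam` play the role of learned pairwise task affinities: the more similar two learned models are, the more strongly the coupling term pulls them together on the next pass, which is one simple way agents could "self-organize" without a prescribed relationship graph.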