Multi-agent learning faces a fundamental tension: leveraging distributed collaboration without sacrificing the personalization needed for diverse agents. This tension intensifies when aiming for full personalization while adapting to unknown heterogeneity levels -- gaining collaborative speedup when agents are similar, while avoiding performance degradation when they are different. To meet this challenge, we propose personalized collaborative learning (PCL), a novel framework in which heterogeneous agents collaboratively learn personalized solutions with seamless adaptivity. Through carefully designed bias-correction and importance-correction mechanisms, our method AffPCL robustly handles both environment and objective heterogeneity. We prove that AffPCL reduces the sample complexity of independent learning by a multiplicative factor of $\max\{n^{-1}, \delta\}$, where $n$ is the number of agents and $\delta\in[0,1]$ measures their heterogeneity. This affinity-based acceleration automatically interpolates between the linear speedup of federated learning in homogeneous settings and the baseline of independent learning, without requiring prior knowledge of the system. Our analysis further reveals that an agent may obtain a linear speedup even by collaborating with arbitrarily dissimilar agents, unveiling new insights into personalization and collaboration in the high-heterogeneity regime.
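To make the claimed interpolation concrete, the following is a sketch in the abstract's own notation; the symbols $N_{\mathrm{ind}}$ and $N_{\mathrm{AffPCL}}$ for the sample complexities of independent learning and AffPCL, and the $\widetilde{O}(\cdot)$ form of the bound, are illustrative assumptions rather than the paper's exact statement:
\[
N_{\mathrm{AffPCL}} = \widetilde{O}\bigl(\max\{n^{-1},\delta\}\cdot N_{\mathrm{ind}}\bigr),
\qquad
\delta \to 0 \;\Rightarrow\; N_{\mathrm{AffPCL}} = \widetilde{O}\bigl(N_{\mathrm{ind}}/n\bigr),
\qquad
\delta \to 1 \;\Rightarrow\; N_{\mathrm{AffPCL}} = \widetilde{O}\bigl(N_{\mathrm{ind}}\bigr).
\]
The first limit recovers the linear speedup of federated learning in the homogeneous case; the second matches the independent-learning baseline, so under this reading collaboration never hurts (up to the factors hidden by $\widetilde{O}$).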