We consider online model selection with decentralized data over $M$ clients, and study the necessity of collaboration among clients. Previous work sidestepped this question and proposed various federated algorithms; we provide a comprehensive answer from the perspective of computational constraints. We propose a federated algorithm and establish upper and lower bounds on the regret showing that (i) collaboration is unnecessary in the absence of additional constraints on the problem; (ii) collaboration is necessary if the computational cost on each client is limited to $o(K)$, where $K$ is the number of candidate hypothesis spaces. We thereby clarify that the collaboration in previous federated algorithms is unnecessary, and improve the regret bounds of algorithms for distributed online multi-kernel learning at smaller computational and communication costs. Our algorithm relies on three new techniques: an improved Bernstein's inequality for martingales, a federated online mirror descent framework, and the decoupling of model selection from prediction, which may be of independent interest.
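As a rough illustration of the second technique (the notation below, $\eta$, $\psi$, $D_\psi$, $\ell^{(m)}_t$, $\mathcal{W}$, is generic and not necessarily the paper's), a federated online mirror descent framework typically alternates a local mirror descent step on each client $m$,
\[
w^{(m)}_{t+1} \;=\; \operatorname*{arg\,min}_{w \in \mathcal{W}} \; \eta \,\big\langle \nabla \ell^{(m)}_t\big(w^{(m)}_t\big),\, w \big\rangle \;+\; D_\psi\big(w,\, w^{(m)}_t\big),
\]
where $D_\psi$ is the Bregman divergence induced by a mirror map $\psi$, with a periodic synchronization step in which the server averages the $M$ client iterates, $\bar{w}_{t+1} = \tfrac{1}{M}\sum_{m=1}^{M} w^{(m)}_{t+1}$, and broadcasts the average back to the clients; this is a standard template, not the paper's exact algorithm.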