We present COALA, a vision-centric Federated Learning (FL) platform, together with a suite of benchmarks for practical FL scenarios, which we categorize into three levels: task, data, and model. At the task level, COALA extends support from simple classification to 15 computer vision tasks, including object detection, segmentation, pose estimation, and more. It also facilitates federated multi-task learning, allowing clients to tackle multiple tasks simultaneously. At the data level, COALA goes beyond supervised FL to benchmark both semi-supervised FL and unsupervised FL. It also benchmarks feature distribution shifts in addition to the commonly considered label distribution shifts. Beyond static data, it supports federated continual learning for continuously changing data in real-world scenarios. At the model level, COALA benchmarks FL with split models and with different models on different clients. The COALA platform offers three degrees of customization for these practical FL scenarios: configuration customization, component customization, and workflow customization. We conduct systematic benchmarking experiments across these practical FL scenarios and highlight potential opportunities for further advancements in FL. Code is open-sourced at https://github.com/SonyResearch/COALA.