In this paper, we address the problem of privacy-preserving hyperparameter (HP) tuning for cross-silo federated learning (FL). We first perform a comprehensive measurement study that benchmarks various HP strategies suitable for FL. Our benchmarks show that the optimal parameters of the FL server, e.g., the learning rate, can be accurately and efficiently tuned based on the HPs found by each client on its local data. We demonstrate that HP averaging is suitable for iid settings, while density-based clustering can uncover the optimal set of parameters in non-iid ones. Then, to prevent information leakage from the exchange of the clients' local HPs, we design and implement PrivTuna, a novel framework for privacy-preserving HP tuning using multiparty homomorphic encryption. We use PrivTuna to implement privacy-preserving federated averaging and density-based clustering, and we experimentally evaluate its performance, demonstrating its computation/communication efficiency and its precision in tuning hyperparameters.
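The two plaintext aggregation strategies described above can be sketched as follows. This is a minimal illustrative example: the function names are hypothetical, the clustering is a simplified eps-neighbourhood flood fill standing in for a density-based method such as DBSCAN, and none of PrivTuna's homomorphic-encryption machinery is shown.

```python
import numpy as np

def average_hps(local_hps):
    # iid case: aggregate clients' locally tuned HP vectors
    # by coordinate-wise averaging.
    return np.mean(np.asarray(local_hps, dtype=float), axis=0)

def densest_cluster_center(local_hps, eps=0.1, min_pts=2):
    # non-iid case (toy stand-in for density-based clustering):
    # group HP vectors into eps-connected components and return
    # the mean of the largest component with >= min_pts members.
    pts = np.asarray(local_hps, dtype=float)
    n = len(pts)
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    visited = [False] * n
    best = None
    for i in range(n):
        if visited[i]:
            continue
        # flood-fill the eps-connected component containing point i
        stack, comp = [i], []
        visited[i] = True
        while stack:
            j = stack.pop()
            comp.append(j)
            for k in range(n):
                if not visited[k] and dist[j, k] <= eps:
                    visited[k] = True
                    stack.append(k)
        if len(comp) >= min_pts and (best is None or len(comp) > len(best)):
            best = comp
    # fall back to plain averaging if no dense group exists
    return pts[best].mean(axis=0) if best is not None else pts.mean(axis=0)
```

For example, with learning rates `[[0.1], [0.11], [0.5]]` and `eps=0.05`, the first two clients form the densest group, so the clustered estimate is 0.105, whereas plain averaging is pulled toward the outlier.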