Federated learning enables multiple parties to jointly train learning models without sharing their underlying data, offering a practical pathway to privacy-preserving collaboration under data-governance constraints. Continued study of federated learning is essential to address its key challenges, including communication efficiency and privacy protection between parties. A recent line of work introduced a novel approach called the Communication-Efficient and Privacy-Adaptable Mechanism (CEPAM), which achieves both objectives simultaneously. CEPAM leverages the rejection-sampled universal quantizer (RSUQ), a randomized vector quantizer whose quantization error matches a prescribed noise distribution, which can be tuned to customize the privacy protection afforded to each party. In this work, we theoretically analyze the privacy guarantees and convergence properties of CEPAM. Moreover, we assess CEPAM's utility through experimental evaluations, including convergence profiles compared with other baselines and accuracy-privacy trade-offs across different parties.
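To make the "quantization error equivalent to a prescribed noise" idea concrete, the following is a minimal sketch of the simpler scalar building block, subtractive dithered (universal) quantization, in which the reconstruction error is uniform noise independent of the input. This is only an illustration of the underlying principle, not the RSUQ itself (which uses rejection sampling to shape the error into a general prescribed distribution); the function name and step size are illustrative choices, not part of the original work.

```python
import random

def dithered_quantize(x, step, rng):
    """Subtractive dithered scalar quantizer (illustrative sketch).

    The dither u ~ Uniform(-step/2, step/2) is shared by encoder and
    decoder. The reconstruction error x_hat - x is then uniform on
    [-step/2, step/2], regardless of the input x.
    """
    u = rng.uniform(-step / 2, step / 2)   # shared dither
    q = step * round((x + u) / step)       # encoder quantizes x + u
    x_hat = q - u                          # decoder subtracts the dither
    return x_hat

# Demo: the error stays within half a quantization step of the input.
rng = random.Random(0)
step = 0.25
errors = [dithered_quantize(3.14159, step, rng) - 3.14159 for _ in range(10000)]
assert all(abs(e) <= step / 2 + 1e-12 for e in errors)
```

Because the error distribution does not depend on the data, it can double as calibrated privacy noise, which is the property CEPAM exploits.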