Motivated by the increasing demand for data security in decentralized federated learning (FL) and stochastic optimization, we formulate and investigate the problem of information-theoretic \emph{decentralized secure aggregation} (DSA). Specifically, we consider a network of $K$ interconnected users, each holding a private input representing, for example, a local model update in FL. The users aim to jointly compute the sum of all inputs while satisfying the security requirement that no user, even when colluding with up to $T$ others, learns anything beyond the intended sum. We characterize the optimal rate region, which specifies the minimum achievable communication and secret key rates for DSA. In particular, we show that to securely compute one bit of the desired input sum, (i) each user must transmit at least one bit to every other user, (ii) each user must hold at least one bit of secret key, and (iii) all users together must hold no fewer than $K - 1$ independent key bits. Our result establishes the fundamental performance limits of DSA and offers insights into the design of provably secure and communication-efficient protocols for distributed learning systems.
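To make the rate region concrete, the following is a minimal sketch (not the paper's actual scheme) of the classic masked-sum idea that matches the stated bounds: each of the $K$ users broadcasts one masked symbol per input symbol, holds one key symbol, and the $K$ key symbols are jointly constrained to sum to zero, so only $K-1$ of them are independent. All names, the modulus, and the key-generation step here are illustrative assumptions.

```python
# Illustrative sketch of aggregation with zero-sum one-time-pad keys over Z_{2^32}.
# Each user k masks its input x_k with a key z_k, where sum_k z_k = 0 (mod MOD),
# so the sum of the broadcasts reveals exactly sum_k x_k and nothing more.
import secrets

MOD = 2**32


def zero_sum_keys(K: int) -> list[int]:
    # K-1 keys are drawn independently and uniformly; the last key is fixed so
    # that all K keys sum to zero, i.e., only K-1 key symbols are independent.
    keys = [secrets.randbelow(MOD) for _ in range(K - 1)]
    keys.append((-sum(keys)) % MOD)
    return keys


def aggregate(inputs: list[int]) -> int:
    keys = zero_sum_keys(len(inputs))
    # Each user broadcasts one masked symbol per input symbol (communication rate 1).
    # Each individual message is uniform on Z_MOD and reveals nothing about x_k.
    messages = [(x + z) % MOD for x, z in zip(inputs, keys)]
    # Summing the broadcasts cancels the keys and recovers the input sum.
    return sum(messages) % MOD
```

For example, `aggregate([7, 13, 42, 5])` returns `67` regardless of the sampled keys, while any single broadcast message is statistically independent of the corresponding input. Note this toy version omits the collusion threshold $T$ handled in the paper.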