The recently introduced multi-user linearly-separable distributed computing framework has revealed how a parallel treatment of users can yield large parallelization gains at relatively low computation and communication costs. These gains stem from a new approach that converts the computing problem into a sparse matrix factorization problem: a matrix \(\mathbf{F}\) describing the users' requests is decomposed as \(\mathbf{F} = \mathbf{D}\mathbf{E}\), where a \(\gamma\)-sparse \(\mathbf{E}\) defines the task allocation across the \(N\) servers, and a \(\delta\)-sparse \(\mathbf{D}\) defines both the connectivity between the \(N\) servers and the \(K\) users and the decoding process. While this approach provides near-optimal performance, its linear nature raises data-secrecy concerns. We adopt an information-theoretic secrecy framework requiring that each user learn nothing more than its own requested function. Our main results provide (i) a necessary condition stating that, for each user \(k\) observing \(\alpha_k\) server responses, the common randomness visible to that user must span a subspace of dimension greater than \(\alpha_k - 1\), and (ii) a necessary and sufficient condition requiring that removing from \(\mathbf{D}\) the columns corresponding to the servers observed by a user leaves a matrix of rank at least \(K-1\). Based on these conditions, we design a general, cost-preserving secrecy-enforcing transformation, valid over both finite and real fields, obtained by appending to \(\mathbf{E}\) a basis of \(\mathrm{Null}(\mathbf{D})\) and carefully injecting shared randomness. This scheme preserves communication and computation costs, guarantees perfect information-theoretic secrecy over finite fields, and, in the real case, yields an explicit mutual-information bound that can be made arbitrarily small by increasing the variance of the Gaussian common randomness.
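The correctness-preservation behind this construction can be illustrated with a minimal numerical sketch over the reals. The sketch below is illustrative only: the dimensions, the choice of Gaussian entries, and the variable names (`D`, `E`, `sigma`) are assumptions, not the paper's actual parameters. It shows that perturbing \(\mathbf{E}\) along a basis of \(\mathrm{Null}(\mathbf{D})\) with scaled random coefficients leaves the product \(\mathbf{D}\mathbf{E}\), and hence every user's decoded function, unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

K, N, L = 3, 5, 4                   # users, servers, jobs (illustrative sizes)
D = rng.standard_normal((K, N))     # connectivity/decoding matrix (K x N)
E = rng.standard_normal((N, L))     # task-allocation matrix (N x L)
F = D @ E                           # requested functions: F = D E

# Right null space of D via SVD: for a generic K x N matrix with K < N,
# the last N - K right singular vectors span Null(D).
_, _, Vt = np.linalg.svd(D)
B = Vt[K:].T                        # N x (N - K) basis of Null(D)

# Inject common randomness: perturb E only along Null(D), scaled by sigma,
# so that D @ Z = 0 and the decoded outputs are unaffected.
sigma = 10.0                        # variance knob for the shared randomness
Z = B @ (sigma * rng.standard_normal((B.shape[1], L)))
E_sec = E + Z

print(np.allclose(D @ E_sec, F))    # correctness is preserved
print(np.allclose(E_sec, E))        # but the allocation matrix itself changed
```

Increasing `sigma` corresponds to the variance knob in the real-field result: the product \(\mathbf{D}\mathbf{E}\) stays exact regardless, while the randomness masking the individual server computations grows.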