We introduce the (Wishart) projection mechanism, a randomized map of the form $S \mapsto M f(S)$ with $M \sim W_d(\tfrac{1}{r} I_d, r)$, and study its differential privacy (DP) properties. For vector-valued queries $f$, we prove non-asymptotic DP guarantees without any additive noise, showing that Wishart randomness alone can suffice. For matrix-valued queries, however, we establish a sharp negative result: in the noise-free setting, the mechanism is not DP, and we demonstrate its vulnerability by implementing a near-perfect membership inference attack (AUC $> 0.99$). We then analyze a noisy variant and prove privacy amplification from randomness and low-rank projection, in both the large- and small-rank regimes, yielding stronger privacy guarantees than additive noise alone. Finally, we show that LoRA-style updates are an instance of the matrix-valued mechanism, implying that LoRA is not inherently private despite its built-in randomness, but that low-rank fine-tuning can be more private than full fine-tuning at the same noise level. Preliminary experiments suggest that tighter accounting enables lower noise and improved accuracy in practice.
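As a minimal illustrative sketch (not the paper's implementation), the mechanism $S \mapsto M f(S)$ can be simulated by drawing $M \sim W_d(\tfrac{1}{r} I_d, r)$ as $M = \tfrac{1}{r} G G^{\top}$ with $G \in \mathbb{R}^{d \times r}$ having i.i.d. standard normal entries, so that $\mathbb{E}[M] = I_d$ and the mechanism is unbiased. The function name `wishart_projection` is our own for illustration:

```python
import numpy as np

def wishart_projection(f_S, r, rng=None):
    """Apply the Wishart projection mechanism S -> M f(S).

    M ~ W_d((1/r) I_d, r) is sampled as (1/r) G G^T, where G is a
    d x r matrix of i.i.d. standard normals; hence E[M] = I_d.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = f_S.shape[0]
    G = rng.standard_normal((d, r))
    M = (G @ G.T) / r  # Wishart((1/r) I_d, r) sample
    return M @ f_S

# Usage: randomize a vector-valued query output of dimension d = 5.
rng = np.random.default_rng(0)
out = wishart_projection(np.ones(5), r=100, rng=rng)
```

Because $\mathbb{E}[M] = I_d$, averaging many independent draws of the output recovers $f(S)$; for a matrix-valued $f(S)$ the same map applies column-wise, which is the setting shown to fail without additive noise.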