Local Differential Privacy (LDP) is a widely adopted privacy-protection model in the Internet of Things (IoT) owing to its lightweight, decentralized, and scalable nature. However, it is vulnerable to poisoning attacks, and existing defenses either incur prohibitive resource overheads or rely on domain-specific prior knowledge, limiting their practical deployment. To address these limitations, we propose PEEL, a Poisoning-Exposing Encoding theoretical framework for LDP, which departs from resource- or prior-dependent countermeasures and instead leverages the inherent structural consistency of LDP-perturbed data. As a non-intrusive post-processing module, PEEL amplifies stealthy poisoning effects by re-encoding LDP-perturbed data via sparsification, normalization, and low-rank projection, thereby revealing both output and rule poisoning attacks through structural inconsistencies in the reconstructed space. Theoretical analysis proves that PEEL, integrated with LDP, retains unbiasedness and statistical accuracy while robustly exposing both output and rule poisoning attacks. Moreover, evaluation results show that LDP-integrated PEEL not only outperforms four state-of-the-art defenses in poisoning-exposure accuracy but also significantly reduces client-side computational costs, making it well suited for large-scale IoT deployments.
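To make the re-encoding pipeline concrete, the following is a minimal sketch of the sparsify → normalize → low-rank-project steps and the resulting structural-inconsistency score. All names and parameters here (`peel_reencode`, the rank `k`, the threshold `tau`, and the synthetic data) are illustrative assumptions, not the paper's actual implementation or parameter choices.

```python
import numpy as np

def peel_reencode(X, k=1, tau=0.01):
    """Illustrative re-encoding of LDP-perturbed frequency estimates.

    X   : (n_groups, d) matrix of estimated frequencies (hypothetical input).
    k   : rank of the low-rank projection (assumed parameter).
    tau : sparsification threshold (assumed parameter).
    Returns a per-row structural-inconsistency score.
    """
    # Sparsification: suppress small, noise-dominated entries.
    S = np.where(np.abs(X) >= tau, X, 0.0)
    # Normalization: rescale each row to unit L1 mass.
    norms = np.abs(S).sum(axis=1, keepdims=True)
    norms[norms == 0] = 1.0
    N = S / norms
    # Low-rank projection via truncated SVD: honest rows share one
    # dominant structure; a poisoned row deviates from it.
    U, s, Vt = np.linalg.svd(N, full_matrices=False)
    R = (U[:, :k] * s[:k]) @ Vt[:k]
    # Structural inconsistency: per-row reconstruction residual.
    return np.linalg.norm(N - R, axis=1)

# Toy example: nine honest groups share one distribution; one poisoned
# group concentrates all mass on three boosted target items.
rng = np.random.default_rng(0)
base = rng.dirichlet(np.ones(20))
honest = base + rng.normal(0, 0.01, size=(9, 20))
poisoned = np.zeros((1, 20))
poisoned[0, :3] = 1.0 / 3.0
scores = peel_reencode(np.vstack([honest, poisoned]))
print(scores.argmax())  # the poisoned row has the largest residual
```

The key design point this sketch illustrates is that detection happens entirely server-side on already-perturbed data, so honest clients incur no extra cost.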