Sensor data collected by Internet of Things (IoT) devices carries detailed information about individuals in their vicinity. Sharing this data with a semi-trusted service provider may compromise the individuals' privacy, as sensitive information can be extracted by powerful machine learning models. Data obfuscation empowered by generative models is a promising approach to generate synthetic sensor data such that the useful information contained in the original data is preserved while the sensitive information is obscured. This newly generated data is then shared with the service provider in place of the original sensor data. In this work, we propose PrivDiffuser, a novel data obfuscation technique based on a denoising diffusion model that attains a superior trade-off between data utility and privacy through effective guidance techniques. Specifically, we extract latent representations that contain information about public and private attributes from sensor data to guide the diffusion model, and impose mutual information-based regularization when learning the latent representations to alleviate the entanglement of public and private attributes, thereby increasing the effectiveness of guidance. Evaluation on three real-world datasets containing different sensing modalities reveals that PrivDiffuser yields a better privacy-utility trade-off than the state-of-the-art obfuscation model, decreasing the utility loss by up to $1.81\%$ and the privacy loss by up to $3.42\%$. Moreover, we show that users with diverse privacy needs can use PrivDiffuser to protect their privacy without having to retrain the model.
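The guidance idea described above can be illustrated with a minimal, purely hypothetical one-dimensional sketch: at each denoising step, the sample is nudged up the gradient of a public-attribute score and down the gradient of a private-attribute score. All function names, peak locations, and weights below are illustrative assumptions, not PrivDiffuser's actual model.

```python
import math

# Toy 1-D sketch (illustrative only; all names and values are hypothetical):
# guide each denoising-style update toward the public attribute and away
# from the private attribute, mirroring the spirit of guided diffusion.

def log_p_public(x):
    # hypothetical log-likelihood of the public attribute, peaking at x = 2.0
    return -(x - 2.0) ** 2

def log_p_private(x):
    # hypothetical log-likelihood of the private attribute, peaking at x = -1.0
    return -(x + 1.0) ** 2

def grad(f, x, eps=1e-5):
    # finite-difference gradient as a stand-in for autograd
    return (f(x + eps) - f(x - eps)) / (2 * eps)

def guided_step(x, w_pub=0.2, w_priv=0.05):
    # one guided update: ascend the public score, descend the private score
    return x + w_pub * grad(log_p_public, x) - w_priv * grad(log_p_private, x)

x = 0.0
for _ in range(200):
    x = guided_step(x)
# x settles near 3.0: close to the public peak, pushed away from the private one
```

The weights `w_pub` and `w_priv` play the role of utility and privacy guidance strengths; in this toy setting their ratio alone determines where the sample settles, which loosely reflects the tunable privacy-utility trade-off the abstract describes.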