Differential privacy quantifies privacy through the privacy budget $\epsilon$, yet its practical interpretation is complicated by variations across models and datasets. Recent research on differentially private machine learning and membership inference has shown that, under the same theoretical $\epsilon$ setting, the attack success rate (ASR) of the likelihood-ratio-based membership inference attack (LiRA) varies with the specific dataset and model, which may make ASR a better indicator of real-world privacy risk. Motivated by this practical privacy measure, we study approaches that lower the attack success rate so as to allow more flexible privacy budget settings in model training. We find that selectively suppressing privacy-sensitive features achieves lower ASR values without compromising application-specific data utility. We use the SHAP and LIME model explainers to evaluate feature sensitivities and to develop feature-masking strategies. Our findings demonstrate that the LiRA $ASR^M$ on a model $M$ properly indicates the inherent privacy risk of a dataset for modeling, and that datasets can be modified to enable larger theoretical $\epsilon$ settings while achieving equivalent practical privacy protection. We have conducted extensive experiments to show the inherent link between ASR and a dataset's privacy risk. By carefully selecting which features to mask, we preserve more data utility under equivalent practical privacy protection and relaxed $\epsilon$ settings. Implementation details are shared online at \url{https://anonymous.4open.science/r/On-sensitive-features-and-empirical-epsilon-lower-bounds-BF67/}.